Class Sequel::Dataset
In: lib/sequel/dataset.rb
lib/sequel/extensions/implicit_subquery.rb
lib/sequel/extensions/synchronize_sql.rb
lib/sequel/extensions/null_dataset.rb
lib/sequel/extensions/auto_literal_strings.rb
lib/sequel/extensions/query.rb
lib/sequel/extensions/pagination.rb
lib/sequel/extensions/round_timestamps.rb
lib/sequel/extensions/dataset_source_alias.rb
lib/sequel/extensions/split_array_nil.rb
lib/sequel/adapters/mysql2.rb
lib/sequel/adapters/mysql.rb
lib/sequel/adapters/utils/stored_procedures.rb
lib/sequel/adapters/utils/replace.rb
lib/sequel/dataset/misc.rb
lib/sequel/dataset/actions.rb
lib/sequel/dataset/dataset_module.rb
lib/sequel/dataset/prepared_statements.rb
lib/sequel/dataset/sql.rb
lib/sequel/dataset/placeholder_literalizer.rb
lib/sequel/dataset/graph.rb
lib/sequel/dataset/query.rb
lib/sequel/dataset/features.rb
Parent: Object

A dataset represents an SQL query. Datasets can be used to select, insert, update and delete records.

Query results are always retrieved on demand, so a dataset can be kept around and reused indefinitely (datasets never cache results):

  my_posts = DB[:posts].where(author: 'david') # no records are retrieved
  my_posts.all # records are retrieved
  my_posts.all # records are retrieved again

Datasets are frozen and use a functional style where modification methods return modified copies of the dataset. This allows you to reuse datasets:

  posts = DB[:posts]
  davids_posts = posts.where(author: 'david')
  old_posts = posts.where{stamp < Date.today - 7}
  davids_old_posts = davids_posts.where{stamp < Date.today - 7}

Datasets are Enumerable objects, so they can be manipulated using any of the Enumerable methods, such as map, inject, etc.

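For example (a minimal sketch, assuming a posts table with a title column):

  DB[:posts].map{|post| post[:title]}        # Enumerable#map over each row hash
  DB[:posts].inject(0){|sum, post| sum + 1}  # Enumerable#inject, counting rows client-side
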
For more information, see the "Dataset Basics" guide.

Methods

<<   ==   []   _columns   _import   _select_map_multiple   _select_map_single   add_graph_aliases   aliased_expression_sql_append   all   array_sql_append   as_hash   avg   bind   boolean_constant_sql_append   cache_get   cache_set   call   case_expression_sql_append   cast_sql_append   clause_methods   clear_columns_cache   clone   column_all_sql_append   columns   columns!   complex_expression_sql_append   compound_clone   compound_from_self   constant_sql_append   count   current_datetime   def_sql_method   delayed_evaluation_sql_append   delete   distinct   dup   each   each_server   empty?   eql?   escape_like   except   exclude   exclude_having   exists   extension   fetch_rows   fetch_rows   filter   first   first!   first_source   first_source_alias   first_source_table   for_update   freeze   from   from_self   function_sql_append   get   graph   graph   grep   group   group_and_count   group_append   group_by   group_cube   group_rollup   grouping_sets   hash   having   import   insert   insert_sql   inspect   intersect   invert   join   join_clause_sql_append   join_on_clause_sql_append   join_table   join_using_clause_sql_append   joined_dataset?   last   lateral   limit   literal_append   lock_style   map   max   min   multi_insert   multi_insert_sql   naked   negative_boolean_constant_sql_append   new   nowait   offset   options_overlap   or   order   order_append   order_by   order_more   order_prepend   ordered_expression_sql_append   paged_each   paged_each   placeholder_literal_string_sql_append   prepare   provides_accurate_rows_matched?   qualified_identifier_sql_append   qualify   quote_identifier_append   quote_identifiers?   quote_schema_table_append   quoted_identifier_append   recursive_cte_requires_column_aliases?   register_extension   requires_placeholder_type_specifiers?   requires_sql_standard_datetimes?   returning   reverse   reverse_order   row_number_column   row_proc   schema_and_table   select   select_all   select_append   select_group   select_hash   select_hash_groups   select_map   select_more   select_order_map   server   server?   set_graph_aliases   simple_select_all?   single_record   single_record!   single_value   single_value!   single_value_ds   skip_limit_check   skip_locked   split_alias   split_multiple_result_sets   split_qualifiers   sql   stream   subscript_sql_append   sum   supports_cte?   supports_cte_in_subqueries?   supports_derived_column_lists?   supports_distinct_on?   supports_group_cube?   supports_group_rollup?   supports_grouping_sets?   supports_insert_select?   supports_intersect_except?   supports_intersect_except_all?   supports_is_true?   supports_join_using?   supports_lateral_subqueries?   supports_limits_in_correlated_subqueries?   supports_modifying_joins?   supports_multiple_column_in?   supports_nowait?   supports_offsets_in_correlated_subqueries?   supports_ordered_distinct_on?   supports_regexp?   supports_replace?   supports_returning?   supports_select_all_and_column?   supports_skip_locked?   supports_timestamp_timezones?   supports_timestamp_usecs?   supports_where_true?   supports_window_functions?   
to_hash   to_hash_groups   to_prepared_statement   truncate   truncate_sql   unfiltered   ungraphed   ungrouped   union   unlimited   unordered   unqualified_column_for   unused_table_alias   update   update_sql   where   where_all   where_each   where_single_value   window_sql_append   with   with_extend   with_quote_identifiers   with_recursive   with_row_proc   with_sql   with_sql_all   with_sql_delete   with_sql_each   with_sql_first   with_sql_insert   with_sql_single_value   with_sql_update  

Included Modules

Constants

OPTS = Sequel::OPTS
TRUE_FREEZE = RUBY_VERSION >= '2.4'   Whether Dataset#freeze can actually freeze datasets. True only on ruby 2.4+, as it requires clone(freeze: false)
STREAMING_SUPPORTED = ::Mysql2::VERSION >= '0.3.12'
PreparedStatementMethods = prepared_statements_module( "sql = self; opts = Hash[opts]; opts[:arguments] = bind_arguments", Sequel::Dataset::UnnumberedArgumentMapper, %w"execute execute_dui execute_insert")

Public Instance methods

Yield all rows matching this dataset. If the dataset is set to split multiple statements, yield one array of hashes per statement instead of yielding each row of every statement as an individual hash.

[Source]

     # File lib/sequel/adapters/mysql.rb, line 281
281:       def fetch_rows(sql)
282:         execute(sql) do |r|
283:           i = -1
284:           cps = db.conversion_procs
285:           cols = r.fetch_fields.map do |f| 
286:             # Pretend tinyint is another integer type if its length is not 1, to
287:             # avoid casting to boolean if convert_tinyint_to_bool is set.
288:             type_proc = f.type == 1 && cast_tinyint_integer?(f) ? cps[2] : cps[f.type]
289:             [output_identifier(f.name), type_proc, i+=1]
290:           end
291:           self.columns = cols.map(&:first)
292:           if opts[:split_multiple_result_sets]
293:             s = []
294:             yield_rows(r, cols){|h| s << h}
295:             yield s
296:           else
297:             yield_rows(r, cols){|h| yield h}
298:           end
299:         end
300:         self
301:       end

[Source]

     # File lib/sequel/adapters/mysql2.rb, line 236
236:       def fetch_rows(sql)
237:         execute(sql) do |r|
238:           self.columns = r.fields.map!{|c| output_identifier(c.to_s)}
239:           r.each(:cast_booleans=>convert_tinyint_to_bool?){|h| yield h}
240:         end
241:         self
242:       end

Don't allow graphing a dataset that splits multiple statements.

[Source]

     # File lib/sequel/adapters/mysql.rb, line 304
304:       def graph(*)
305:         raise(Error, "Can't graph a dataset that splits multiple result sets") if opts[:split_multiple_result_sets]
306:         super
307:       end

Use streaming to implement paging if Mysql2 supports it and it hasn't been disabled.

[Source]

     # File lib/sequel/adapters/mysql2.rb, line 246
246:       def paged_each(opts=OPTS, &block)
247:         if STREAMING_SUPPORTED && opts[:stream] != false
248:           stream.each(&block)
249:         else
250:           super
251:         end
252:       end

Makes each yield arrays of rows, with each array containing the rows for a given result set, so you can submit SQL with multiple statements and easily determine which statement returned which results. Does not work with graphing.

Modifies the row_proc of the returned dataset so that it still works as expected (running on the hashes instead of on the arrays of hashes). If you modify the row_proc afterward, note that it will receive an array of hashes instead of a hash.

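For illustration, a hedged sketch (it assumes the MySQL connection is configured to allow multiple statements per query, and a hypothetical posts/comments schema):

  ds = DB["SELECT * FROM posts; SELECT * FROM comments"].split_multiple_result_sets
  posts_rows, comments_rows = ds.all  # one array of row hashes per statement
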
[Source]

     # File lib/sequel/adapters/mysql.rb, line 318
318:       def split_multiple_result_sets
319:         raise(Error, "Can't split multiple statements on a graphed dataset") if opts[:graph]
320:         ds = clone(:split_multiple_result_sets=>true)
321:         ds = ds.with_row_proc(proc{|x| x.map{|h| row_proc.call(h)}}) if row_proc
322:         ds
323:       end

Return a clone of the dataset that will stream rows when iterating over the result set, so it can handle large datasets that won't fit in memory (requires mysql2 0.3.12+ to have an effect).

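A minimal usage sketch (huge_table and process are placeholder names):

  DB[:huge_table].stream.each{|row| process(row)}  # rows are yielded as they arrive, not buffered
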
[Source]

     # File lib/sequel/adapters/mysql2.rb, line 257
257:       def stream
258:         clone(:stream=>true)
259:       end

6 - Miscellaneous methods

These methods don't fit cleanly into another section.

Attributes

cache  [R]  Access the cache for the current dataset. Should be used with caution, as access to the cache is not thread safe without a mutex if other threads can reference the dataset. Symbol keys prefixed with an underscore are reserved for internal use.
db  [R]  The database related to this dataset. This is the Database instance that will execute all of this dataset's queries.
opts  [R]  The hash of options for this dataset; keys are symbols.

Public Class methods

Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:

  DB[:posts]

Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter provides a subclass of Sequel::Dataset, and has the Database#dataset method return an instance of that subclass.

[Source]

    # File lib/sequel/dataset/misc.rb, line 25
25:     def initialize(db)
26:       @db = db
27:       @opts = OPTS
28:       @cache = {}
29:       freeze
30:     end

Public Instance methods

Define a hash value such that datasets with the same class, DB, and opts will be considered equal.

[Source]

    # File lib/sequel/dataset/misc.rb, line 34
34:     def ==(o)
35:       o.is_a?(self.class) && db == o.db && opts == o.opts
36:     end

An object representing the current date or time; it will be an instance of Sequel.datetime_class.

[Source]

    # File lib/sequel/dataset/misc.rb, line 40
40:     def current_datetime
41:       Sequel.datetime_class.now
42:     end

Return self, as datasets are always frozen.

[Source]

    # File lib/sequel/dataset/misc.rb, line 50
50:     def dup
51:       self
52:     end

Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:

  DB[:configs].where(key: 'setting').each_server{|ds| ds.update(value: 'new_value')}

[Source]

    # File lib/sequel/dataset/misc.rb, line 59
59:     def each_server
60:       db.servers.each{|s| yield server(s)}
61:     end

Alias for ==

[Source]

    # File lib/sequel/dataset/misc.rb, line 45
45:     def eql?(o)
46:       self == o
47:     end

Returns the string with the LIKE metacharacters (% and _) escaped. Useful for when the LIKE term is a user-provided string where metacharacters should not be recognized. Example:

  ds.escape_like("foo\\%_") # 'foo\\\%\_'

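A typical use is escaping user input before building a LIKE pattern (a sketch assuming a user-supplied search string and a name column):

  term = DB[:table].escape_like(search)              # search is the raw user-provided string
  DB[:table].where(Sequel.like(:name, "%#{term}%"))  # metacharacters in search are matched literally
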
[Source]

    # File lib/sequel/dataset/misc.rb, line 68
68:     def escape_like(string)
69:       string.gsub(/[\\%_]/){|m| "\\#{m}"}
70:     end

Alias of first_source_alias

[Source]

    # File lib/sequel/dataset/misc.rb, line 91
91:     def first_source
92:       first_source_alias
93:     end

The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an Error. If the table is aliased, returns the aliased name.

  DB[:table].first_source_alias
  # => :table

  DB[Sequel[:table].as(:t)].first_source_alias
  # => :t

[Source]

     # File lib/sequel/dataset/misc.rb, line 103
103:     def first_source_alias
104:       source = @opts[:from]
105:       if source.nil? || source.empty?
106:         raise Error, 'No source specified for query'
107:       end
108:       case s = source.first
109:       when SQL::AliasedExpression
110:         s.alias
111:       when Symbol
112:         _, _, aliaz = split_symbol(s)
113:         aliaz ? aliaz.to_sym : s
114:       else
115:         s
116:       end
117:     end

The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an Error. If the table is aliased, returns the original table, not the alias.

  DB[:table].first_source_table
  # => :table

  DB[Sequel[:table].as(:t)].first_source_table
  # => :table

[Source]

     # File lib/sequel/dataset/misc.rb, line 128
128:     def first_source_table
129:       source = @opts[:from]
130:       if source.nil? || source.empty?
131:         raise Error, 'No source specified for query'
132:       end
133:       case s = source.first
134:       when SQL::AliasedExpression
135:         s.expression
136:       when Symbol
137:         sch, table, aliaz = split_symbol(s)
138:         aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s
139:       else
140:         s
141:       end
142:     end

Freeze the opts when freezing the dataset.

[Source]

    # File lib/sequel/dataset/misc.rb, line 74
74:       def freeze
75:         @opts.freeze
76:         super
77:       end

Define a hash value such that datasets with the same class, DB, and opts, will have the same hash value.

[Source]

     # File lib/sequel/dataset/misc.rb, line 146
146:     def hash
147:       [self.class, db, opts].hash
148:     end

Returns a string representation of the dataset including the class name and the corresponding SQL select statement.

[Source]

     # File lib/sequel/dataset/misc.rb, line 152
152:     def inspect
153:       "#<#{visible_class_name}: #{sql.inspect}>"
154:     end

Whether this dataset is a joined dataset (multiple FROM tables or any JOINs).

[Source]

     # File lib/sequel/dataset/misc.rb, line 157
157:     def joined_dataset?
158:      !!((opts[:from].is_a?(Array) && opts[:from].size > 1) || opts[:join])
159:     end

The alias to use for the row_number column, used when emulating OFFSET support and for eager limit strategies

[Source]

     # File lib/sequel/dataset/misc.rb, line 163
163:     def row_number_column
164:       :x_sequel_row_number_x
165:     end

The row_proc for this dataset: any object that responds to call with a single hash argument and returns the object you want each to return.

[Source]

     # File lib/sequel/dataset/misc.rb, line 169
169:     def row_proc
170:       @opts[:row_proc]
171:     end

Splits a possible implicit alias in c, handling both SQL::AliasedExpressions and Symbols. Returns an array of two elements, with the first being the main expression, and the second being the alias.

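For example (return values shown are illustrative):

  ds.split_alias(Sequel[:table].as(:t))  # => [Sequel[:table], :t]
  ds.split_alias(:column)                # => [:column, nil]
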
[Source]

     # File lib/sequel/dataset/misc.rb, line 176
176:     def split_alias(c)
177:       case c
178:       when Symbol
179:         c_table, column, aliaz = split_symbol(c)
180:         [c_table ? SQL::QualifiedIdentifier.new(c_table, column.to_sym) : column.to_sym, aliaz]
181:       when SQL::AliasedExpression
182:         [c.expression, c.alias]
183:       when SQL::JoinClause
184:         [c.table, c.table_alias]
185:       else
186:         [c, nil]
187:       end
188:     end

This returns an SQL::Identifier or SQL::AliasedExpression containing an SQL identifier that represents the unqualified column for the given value. The given value should be a Symbol, SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression containing one of those. In other cases, this returns nil.

[Source]

     # File lib/sequel/dataset/misc.rb, line 195
195:     def unqualified_column_for(v)
196:       unless v.is_a?(String)
197:         _unqualified_column_for(v)
198:       end
199:     end

Creates a unique table alias that hasn't already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with "_N" if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.

You can provide an additional second array argument containing symbols that should not be considered valid table aliases. The current aliases for the FROM and JOIN tables are automatically included in this array.

  DB[:table].unused_table_alias(:t)
  # => :t

  DB[:table].unused_table_alias(:table)
  # => :table_0

  DB[:table, :table_0].unused_table_alias(:table)
  # => :table_1

  DB[:table, :table_0].unused_table_alias(:table, [:table_1, :table_2])
  # => :table_3

[Source]

     # File lib/sequel/dataset/misc.rb, line 223
223:     def unused_table_alias(table_alias, used_aliases = [])
224:       table_alias = alias_symbol(table_alias)
225:       used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from]
226:       used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join]
227:       if used_aliases.include?(table_alias)
228:         i = 0
229:         while true
230:           ta = :"#{table_alias}_#{i}"
231:           return ta unless used_aliases.include?(ta)
232:           i += 1 
233:         end
234:       else
235:         table_alias
236:       end
237:     end

Return a modified dataset with quote_identifiers set.

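For example (the quote character shown is illustrative; the actual character depends on the database):

  DB[:table].with_quote_identifiers(true).sql   # => 'SELECT * FROM "table"'
  DB[:table].with_quote_identifiers(false).sql  # => 'SELECT * FROM table'
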
[Source]

     # File lib/sequel/dataset/misc.rb, line 240
240:     def with_quote_identifiers(v)
241:       clone(:quote_identifiers=>v, :skip_symbol_cache=>true)
242:     end

Protected Instance methods

The cached columns for the current dataset.

[Source]

     # File lib/sequel/dataset/misc.rb, line 271
271:     def _columns
272:       cache_get(:_columns)
273:     end

Retrieve a value from the dataset's cache in a thread safe manner.

[Source]

     # File lib/sequel/dataset/misc.rb, line 253
253:     def cache_get(k)
254:       Sequel.synchronize{@cache[k]}
255:     end

Set a value in the dataset's cache in a thread safe manner.

[Source]

     # File lib/sequel/dataset/misc.rb, line 258
258:     def cache_set(k, v)
259:       Sequel.synchronize{@cache[k] = v}
260:     end

Clear the columns hash for the current dataset. This is not a thread safe operation, so it should only be used if the dataset could not be used by another thread (such as one that was just created via clone).

[Source]

     # File lib/sequel/dataset/misc.rb, line 266
266:     def clear_columns_cache
267:       @cache.delete(:_columns)
268:     end

2 - Methods that execute code on the database

These methods all execute the dataset's SQL on the database. They don't return modified datasets, so if used in a method chain they should be the last method called.

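For example (a minimal sketch assuming an items table with active and name columns):

  DB[:items].where(active: true).order(:name).all  # query methods first, action method (all) last
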
Classes and Modules

Class Sequel::Dataset::DatasetModule

Constants

ACTION_METHODS = (<<-METHS).split.map(&:to_sym).freeze   Action methods defined by Sequel that execute code on the database.
  << [] all as_hash avg count columns columns! delete each empty? fetch_rows
  first first! get import insert last map max min multi_insert paged_each
  select_hash select_hash_groups select_map select_order_map single_record
  single_record! single_value single_value! sum to_hash to_hash_groups
  truncate update where_all where_each where_single_value
METHS
COLUMNS_CLONE_OPTIONS = {:distinct => nil, :limit => 1, :offset=>nil, :where=>nil, :having=>nil, :order=>nil, :row_proc=>nil, :graph=>nil, :eager_graph=>nil}.freeze   The clone options to use when retrieving columns for a dataset.
COUNT_SELECT = Sequel.function(:count).*.as(:count)
EMPTY_SELECT = Sequel::SQL::AliasedExpression.new(1, :one)

Public Instance methods

Inserts the given argument into the database. Returns self so it can be used safely when chaining:

  DB[:items] << {id: 0, name: 'Zero'} << DB[:old_items].select(:id, :name)

[Source]

    # File lib/sequel/dataset/actions.rb, line 29
29:     def <<(arg)
30:       insert(arg)
31:       self
32:     end

Returns the first record matching the conditions. Examples:

  DB[:table][id: 1] # SELECT * FROM table WHERE (id = 1) LIMIT 1
  # => {:id=>1}

[Source]

    # File lib/sequel/dataset/actions.rb, line 38
38:     def [](*conditions)
39:       raise(Error, 'You cannot call Dataset#[] with an integer or with no arguments') if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0
40:       first(*conditions)
41:     end

Returns an array with all records in the dataset. If a block is given, the array is iterated over after all items have been loaded.

  DB[:table].all # SELECT * FROM table
  # => [{:id=>1, ...}, {:id=>2, ...}, ...]

  # Iterate over all rows in the table
  DB[:table].all{|row| p row}

[Source]

    # File lib/sequel/dataset/actions.rb, line 51
51:     def all(&block)
52:       _all(block){|a| each{|r| a << r}}
53:     end

Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.

  DB[:table].as_hash(:id, :name) # SELECT * FROM table
  # {1=>'Jim', 2=>'Bob', ...}

  DB[:table].as_hash(:id) # SELECT * FROM table
  # {1=>{:id=>1, :name=>'Jim'}, 2=>{:id=>2, :name=>'Bob'}, ...}

You can also provide an array of column names for either the key_column, the value column, or both:

  DB[:table].as_hash([:id, :foo], [:name, :bar]) # SELECT * FROM table
  # {[1, 3]=>['Jim', 'bo'], [2, 4]=>['Bob', 'be'], ...}

  DB[:table].as_hash([:id, :name]) # SELECT * FROM table
  # {[1, 'Jim']=>{:id=>1, :name=>'Jim'}, [2, 'Bob']=>{:id=>2, :name=>'Bob'}, ...}

Options:

:all : Use all instead of each to retrieve the objects
:hash : The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.

[Source]

     # File lib/sequel/dataset/actions.rb, line 765
765:     def as_hash(key_column, value_column = nil, opts = OPTS)
766:       h = opts[:hash] || {}
767:       meth = opts[:all] ? :all : :each
768:       if value_column
769:         return naked.as_hash(key_column, value_column, opts) if row_proc
770:         if value_column.is_a?(Array)
771:           if key_column.is_a?(Array)
772:             public_send(meth){|r| h[r.values_at(*key_column)] = r.values_at(*value_column)}
773:           else
774:             public_send(meth){|r| h[r[key_column]] = r.values_at(*value_column)}
775:           end
776:         else
777:           if key_column.is_a?(Array)
778:             public_send(meth){|r| h[r.values_at(*key_column)] = r[value_column]}
779:           else
780:             public_send(meth){|r| h[r[key_column]] = r[value_column]}
781:           end
782:         end
783:       elsif key_column.is_a?(Array)
784:         public_send(meth){|r| h[key_column.map{|k| r[k]}] = r}
785:       else
786:         public_send(meth){|r| h[r[key_column]] = r}
787:       end
788:       h
789:     end

Returns the average value for the given column/expression. Uses a virtual row block if no argument is given.

  DB[:table].avg(:number) # SELECT avg(number) FROM table LIMIT 1
  # => 3
  DB[:table].avg{function(column)} # SELECT avg(function(column)) FROM table LIMIT 1
  # => 1

[Source]

    # File lib/sequel/dataset/actions.rb, line 62
62:     def avg(arg=Sequel.virtual_row(&Proc.new))
63:       _aggregate(:avg, arg)
64:     end

Returns the columns in the result set in order as an array of symbols. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to retrieve a single row in order to get the columns.

If you are looking for all columns for a single table and maybe some information about each column (e.g. database type), see Database#schema.

  DB[:table].columns
  # => [:id, :name]

[Source]

    # File lib/sequel/dataset/actions.rb, line 75
75:     def columns
76:       _columns || columns!
77:     end

Ignore any cached column information and perform a query to retrieve a row in order to get the columns.

  DB[:table].columns!
  # => [:id, :name]

[Source]

    # File lib/sequel/dataset/actions.rb, line 84
84:     def columns!
85:       ds = clone(COLUMNS_CLONE_OPTIONS)
86:       ds.each{break}
87: 
88:       if cols = ds.cache[:_columns]
89:         self.columns = cols
90:       else
91:         []
92:       end
93:     end

Returns the number of records in the dataset. If an argument is provided, it is used as the argument to count. If a block is provided, it is treated as a virtual row, and the result is used as the argument to count.

  DB[:table].count # SELECT count(*) AS count FROM table LIMIT 1
  # => 3
  DB[:table].count(:column) # SELECT count(column) AS count FROM table LIMIT 1
  # => 2
  DB[:table].count{foo(column)} # SELECT count(foo(column)) AS count FROM table LIMIT 1
  # => 1

[Source]

     # File lib/sequel/dataset/actions.rb, line 108
108:     def count(arg=(no_arg=true), &block)
109:       if no_arg && !block
110:         cached_dataset(:_count_ds) do
111:           aggregate_dataset.select(COUNT_SELECT).single_value_ds
112:         end.single_value!.to_i
113:       else
114:         if block
115:           if no_arg
116:             arg = Sequel.virtual_row(&block)
117:           else
118:             raise Error, 'cannot provide both argument and block to Dataset#count'
119:           end
120:         end
121: 
122:         _aggregate(:count, arg)
123:       end
124:     end

Deletes the records in the dataset, returning the number of records deleted.

  DB[:table].delete # DELETE FROM table
  # => 3

[Source]

     # File lib/sequel/dataset/actions.rb, line 130
130:     def delete(&block)
131:       sql = delete_sql
132:       if uses_returning?(:delete)
133:         returning_fetch_rows(sql, &block)
134:       else
135:         execute_dui(sql)
136:       end
137:     end

Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.

  DB[:table].each{|row| p row} # SELECT * FROM table

Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each for the outer queries, or use a separate thread or shard inside each.

[Source]

     # File lib/sequel/dataset/actions.rb, line 148
148:     def each
149:       if rp = row_proc
150:         fetch_rows(select_sql){|r| yield rp.call(r)}
151:       else
152:         fetch_rows(select_sql){|r| yield r}
153:       end
154:       self
155:     end

Returns true if no records exist in the dataset, false otherwise

  DB[:table].empty? # SELECT 1 AS one FROM table LIMIT 1
  # => false

[Source]

     # File lib/sequel/dataset/actions.rb, line 163
163:     def empty?
164:       cached_dataset(:_empty_ds) do
165:         single_value_ds.unordered.select(EMPTY_SELECT)
166:       end.single_value!.nil?
167:     end

Returns the first matching record if no arguments are given. If an integer argument is given, it is interpreted as a limit, and then returns all matching records up to that limit. If any other type of argument(s) is passed, it is treated as a filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything.

If there are no records in the dataset, returns nil (or an empty array if an integer argument is given).

Examples:

  DB[:table].first # SELECT * FROM table LIMIT 1
  # => {:id=>7}

  DB[:table].first(2) # SELECT * FROM table LIMIT 2
  # => [{:id=>6}, {:id=>4}]

  DB[:table].first(id: 2) # SELECT * FROM table WHERE (id = 2) LIMIT 1
  # => {:id=>2}

  DB[:table].first(Sequel.lit("id = 3")) # SELECT * FROM table WHERE (id = 3) LIMIT 1
  # => {:id=>3}

  DB[:table].first(Sequel.lit("id = ?", 4)) # SELECT * FROM table WHERE (id = 4) LIMIT 1
  # => {:id=>4}

  DB[:table].first{id > 2} # SELECT * FROM table WHERE (id > 2) LIMIT 1
  # => {:id=>5}

  DB[:table].first(Sequel.lit("id > ?", 4)){id < 6} # SELECT * FROM table WHERE ((id > 4) AND (id < 6)) LIMIT 1
  # => {:id=>5}

  DB[:table].first(2){id < 2} # SELECT * FROM table WHERE (id < 2) LIMIT 2
  # => [{:id=>1}]

[Source]

     # File lib/sequel/dataset/actions.rb, line 204
204:     def first(*args, &block)
205:       case args.length
206:       when 0
207:         unless block
208:           return single_record
209:         end
210:       when 1
211:         arg = args[0]
212:         if arg.is_a?(Integer)
213:           res = if block
214:             if loader = cached_placeholder_literalizer(:_first_integer_cond_loader) do |pl|
215:                 where(pl.arg).limit(pl.arg)
216:               end
217: 
218:               loader.all(filter_expr(&block), arg)
219:             else
220:               where(&block).limit(arg).all
221:             end
222:           else
223:             if loader = cached_placeholder_literalizer(:_first_integer_loader) do |pl|
224:                limit(pl.arg)
225:               end
226: 
227:               loader.all(arg)
228:             else
229:               limit(arg).all
230:             end
231:           end
232: 
233:           return res
234:         end
235:         args = arg
236:       end
237: 
238:       if loader = cached_placeholder_literalizer(:_first_cond_loader) do |pl|
239:           _single_record_ds.where(pl.arg)
240:         end
241: 
242:         loader.first(filter_expr(args, &block))
243:       else
244:         _single_record_ds.where(args, &block).single_record!
245:       end
246:     end

Calls first. If first returns nil (signaling that no row matches), raises a Sequel::NoMatchingRow exception.

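Examples (illustrative data):

  DB[:table].first!(id: 1) # SELECT * FROM table WHERE (id = 1) LIMIT 1
  # => {:id=>1}

  DB[:table].first!(id: 0) # raises Sequel::NoMatchingRow if no row matches
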
[Source]

     # File lib/sequel/dataset/actions.rb, line 250
250:     def first!(*args, &block)
251:       first(*args, &block) || raise(Sequel::NoMatchingRow.new(self))
252:     end

Return the column value for the first matching record in the dataset. Raises an error if both an argument and a block are given.

  DB[:table].get(:id) # SELECT id FROM table LIMIT 1
  # => 3

  ds.get{sum(id)} # SELECT sum(id) AS v FROM table LIMIT 1
  # => 6

You can pass an array of arguments to return multiple arguments, but you must make sure each element in the array has an alias that Sequel can determine:

  DB[:table].get([:id, :name]) # SELECT id, name FROM table LIMIT 1
  # => [3, 'foo']

  DB[:table].get{[sum(id).as(sum), name]} # SELECT sum(id) AS sum, name FROM table LIMIT 1
  # => [6, 'foo']

[Source]

     # File lib/sequel/dataset/actions.rb, line 272
272:     def get(column=(no_arg=true; nil), &block)
273:       ds = naked
274:       if block
275:         raise(Error, 'Must call Dataset#get with an argument or a block, not both') unless no_arg
276:         ds = ds.select(&block)
277:         column = ds.opts[:select]
278:         column = nil if column.is_a?(Array) && column.length < 2
279:       else
280:         case column
281:         when Array
282:           ds = ds.select(*column)
283:         when LiteralString, Symbol, SQL::Identifier, SQL::QualifiedIdentifier, SQL::AliasedExpression
284:           if loader = cached_placeholder_literalizer(:_get_loader) do |pl|
285:               ds.single_value_ds.select(pl.arg)
286:             end
287: 
288:             return loader.get(column)
289:           end
290: 
291:           ds = ds.select(column)
292:         else
293:           if loader = cached_placeholder_literalizer(:_get_alias_loader) do |pl|
294:               ds.single_value_ds.select(Sequel.as(pl.arg, :v))
295:             end
296: 
297:             return loader.get(column)
298:           end
299: 
300:           ds = ds.select(Sequel.as(column, :v))
301:         end
302:       end
303: 
304:       if column.is_a?(Array)
305:        if r = ds.single_record
306:          r.values_at(*hash_key_symbols(column))
307:        end
308:       else
309:         ds.single_value
310:       end
311:     end

Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction.

This method is called with a columns array and an array of value arrays:

  DB[:table].import([:x, :y], [[1, 2], [3, 4]])
  # INSERT INTO table (x, y) VALUES (1, 2)
  # INSERT INTO table (x, y) VALUES (3, 4)

This method also accepts a dataset instead of an array of value arrays:

  DB[:table].import([:x, :y], DB[:table2].select(:a, :b))
  # INSERT INTO table (x, y) SELECT a, b FROM table2

Options:

:commit_every : Open a new transaction for every given number of records. For example, if you provide a value of 50, will commit after every 50 records (see the sketch after this list).
:return : When this is set to :primary_key, returns an array of autoincremented primary key values for the rows inserted.
:server : Set the server/shard to use for the transaction and insert queries.
:slice : Same as :commit_every, :commit_every takes precedence.

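A sketch of the options above (rows is a placeholder array of value arrays):

  DB[:table].import([:x, :y], rows, commit_every: 50)      # commit after every 50 rows
  DB[:table].import([:x, :y], rows, return: :primary_key)  # => [1, 2, 3, ...]
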
[Source]

     # File lib/sequel/dataset/actions.rb, line 338
338:     def import(columns, values, opts=OPTS)
339:       return @db.transaction{insert(columns, values)} if values.is_a?(Dataset)
340: 
341:       return if values.empty?
342:       raise(Error, 'Using Sequel::Dataset#import with an empty column array is not allowed') if columns.empty?
343:       ds = opts[:server] ? server(opts[:server]) : self
344:       
345:       if slice_size = opts.fetch(:commit_every, opts.fetch(:slice, default_import_slice))
346:         offset = 0
347:         rows = []
348:         while offset < values.length
349:           rows << ds._import(columns, values[offset, slice_size], opts)
350:           offset += slice_size
351:         end
352:         rows.flatten
353:       else
354:         ds._import(columns, values, opts)
355:       end
356:     end

Inserts values into the associated table. The returned value is generally the value of the autoincremented primary key for the inserted row, assuming that a single row is inserted and the table has an autoincrementing primary key.

insert handles a number of different argument formats:

no arguments or single empty hash : Uses DEFAULT VALUES
single hash : Most common format, treats keys as columns and values as values
single array : Treats entries as values, with no columns
two arrays : Treats first array as columns, second array as values
single Dataset : Treats as an insert based on a selection from the dataset given, with no columns
array and dataset : Treats as an insert based on a selection from the dataset given, with the columns given by the array.

Examples:

  DB[:items].insert
  # INSERT INTO items DEFAULT VALUES

  DB[:items].insert({})
  # INSERT INTO items DEFAULT VALUES

  DB[:items].insert([1,2,3])
  # INSERT INTO items VALUES (1, 2, 3)

  DB[:items].insert([:a, :b], [1,2])
  # INSERT INTO items (a, b) VALUES (1, 2)

  DB[:items].insert(a: 1, b: 2)
  # INSERT INTO items (a, b) VALUES (1, 2)

  DB[:items].insert(DB[:old_items])
  # INSERT INTO items SELECT * FROM old_items

  DB[:items].insert([:a, :b], DB[:old_items])
  # INSERT INTO items (a, b) SELECT * FROM old_items

[Source]

     # File lib/sequel/dataset/actions.rb, line 394
394:     def insert(*values, &block)
395:       sql = insert_sql(*values)
396:       if uses_returning?(:insert)
397:         returning_fetch_rows(sql, &block)
398:       else
399:         execute_insert(sql)
400:       end
401:     end

Reverses the order and then runs first with the given arguments and block. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.

  DB[:table].order(:id).last # SELECT * FROM table ORDER BY id DESC LIMIT 1
  # => {:id=>10}

  DB[:table].order(Sequel.desc(:id)).last(2) # SELECT * FROM table ORDER BY id ASC LIMIT 2
  # => [{:id=>1}, {:id=>2}]

[Source]

     # File lib/sequel/dataset/actions.rb, line 413
413:     def last(*args, &block)
414:       raise(Error, 'No order specified') unless @opts[:order]
415:       reverse.first(*args, &block)
416:     end

Maps column values for each record in the dataset (if an argument is given) or performs the stock mapping functionality of Enumerable otherwise. Raises an Error if both an argument and block are given.

  DB[:table].map(:id) # SELECT * FROM table
  # => [1, 2, 3, ...]

  DB[:table].map{|r| r[:id] * 2} # SELECT * FROM table
  # => [2, 4, 6, ...]

You can also provide an array of column names:

  DB[:table].map([:id, :name]) # SELECT * FROM table
  # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]

[Source]

     # File lib/sequel/dataset/actions.rb, line 432
432:     def map(column=nil, &block)
433:       if column
434:         raise(Error, 'Must call Dataset#map with either an argument or a block, not both') if block
435:         return naked.map(column) if row_proc
436:         if column.is_a?(Array)
437:           super(){|r| r.values_at(*column)}
438:         else
439:           super(){|r| r[column]}
440:         end
441:       else
442:         super(&block)
443:       end
444:     end

Returns the maximum value for the given column/expression. Uses a virtual row block if no argument is given.

  DB[:table].max(:id) # SELECT max(id) FROM table LIMIT 1
  # => 10
  DB[:table].max{function(column)} # SELECT max(function(column)) FROM table LIMIT 1
  # => 7

[Source]

     # File lib/sequel/dataset/actions.rb, line 453
453:     def max(arg=Sequel.virtual_row(&Proc.new))
454:       _aggregate(:max, arg)
455:     end

Returns the minimum value for the given column/expression. Uses a virtual row block if no argument is given.

  DB[:table].min(:id) # SELECT min(id) FROM table LIMIT 1
  # => 1
  DB[:table].min{function(column)} # SELECT min(function(column)) FROM table LIMIT 1
  # => 0

[Source]

     # File lib/sequel/dataset/actions.rb, line 464
464:     def min(arg=Sequel.virtual_row(&Proc.new))
465:       _aggregate(:min, arg)
466:     end

This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:

  DB[:table].multi_insert([{x: 1}, {x: 2}])
  # INSERT INTO table (x) VALUES (1)
  # INSERT INTO table (x) VALUES (2)

Be aware that all hashes should have the same keys if you use this calling method; otherwise, some columns could be missed or set to null instead of to default values.

This respects the same options as import.

[Source]

     # File lib/sequel/dataset/actions.rb, line 480
480:     def multi_insert(hashes, opts=OPTS)
481:       return if hashes.empty?
482:       columns = hashes.first.keys
483:       import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
484:     end

Yields each row in the dataset, but internally uses multiple queries as needed to process the entire result set without keeping all rows in the dataset in memory, even if the underlying driver buffers all query results in memory.

Because this uses multiple queries internally, in order to remain consistent, it also uses a transaction internally. Additionally, to work correctly, the dataset must have an unambiguous order. Using an ambiguous order can result in an infinite loop, as well as subtler bugs such as yielding duplicate rows or rows being skipped.

Sequel checks that the datasets using this method have an order, but it cannot ensure that the order is unambiguous.

Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, use a separate thread or shard inside paged_each.

Options:

:rows_per_fetch : The number of rows to fetch per query. Defaults to 1000.
:strategy : The strategy to use for paging of results. By default this is :offset, for using an approach with a limit and offset for every page. This can be set to :filter, which uses a limit and a filter that excludes rows from previous pages. In order for this strategy to work, you must be selecting the columns you are ordering by, and none of the columns can contain NULLs. Note that some Sequel adapters have optimized implementations that will use cursors or streaming regardless of the :strategy option used.
:filter_values : If the strategy: :filter option is used, this option should be a proc that accepts the last retrieved row for the previous page and an array of ORDER BY expressions, and returns an array of values relating to those expressions for the last retrieved row. You will need to use this option if your ORDER BY expressions are not simple columns, if they contain qualified identifiers that would be ambiguous unqualified, if they contain any identifiers that are aliased in SELECT, and potentially other cases.

Examples:

  DB[:table].order(:id).paged_each{|row| }
  # SELECT * FROM table ORDER BY id LIMIT 1000
  # SELECT * FROM table ORDER BY id LIMIT 1000 OFFSET 1000
  # ...

  DB[:table].order(:id).paged_each(:rows_per_fetch=>100){|row| }
  # SELECT * FROM table ORDER BY id LIMIT 100
  # SELECT * FROM table ORDER BY id LIMIT 100 OFFSET 100
  # ...

  DB[:table].order(:id).paged_each(strategy: :filter){|row| }
  # SELECT * FROM table ORDER BY id LIMIT 1000
  # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
  # ...

  DB[:table].order(:id).paged_each(strategy: :filter,
    filter_values: lambda{|row, exprs| [row[:id]]}){|row| }
  # SELECT * FROM table ORDER BY id LIMIT 1000
  # SELECT * FROM table WHERE id > 1001 ORDER BY id LIMIT 1000
  # ...

[Source]

     # File lib/sequel/dataset/actions.rb, line 541
541:     def paged_each(opts=OPTS)
542:       unless @opts[:order]
543:         raise Sequel::Error, "Dataset#paged_each requires the dataset be ordered"
544:       end
545:       unless block_given?
546:         return enum_for(:paged_each, opts)
547:       end
548: 
549:       total_limit = @opts[:limit]
550:       offset = @opts[:offset]
551:       if server = @opts[:server]
552:         opts = Hash[opts]
553:         opts[:server] = server
554:       end
555: 
556:       rows_per_fetch = opts[:rows_per_fetch] || 1000
557:       strategy = if offset || total_limit
558:         :offset
559:       else
560:         opts[:strategy] || :offset
561:       end
562: 
563:       db.transaction(opts) do
564:         case strategy
565:         when :filter
566:           filter_values = opts[:filter_values] || proc{|row, exprs| exprs.map{|e| row[hash_key_symbol(e)]}}
567:           base_ds = ds = limit(rows_per_fetch)
568:           while ds
569:             last_row = nil
570:             ds.each do |row|
571:               last_row = row
572:               yield row
573:             end
574:             ds = (base_ds.where(ignore_values_preceding(last_row, &filter_values)) if last_row)
575:           end
576:         else
577:           offset ||= 0
578:           num_rows_yielded = rows_per_fetch
579:           total_rows = 0
580: 
581:           while num_rows_yielded == rows_per_fetch && (total_limit.nil? || total_rows < total_limit)
582:             if total_limit && total_rows + rows_per_fetch > total_limit
583:               rows_per_fetch = total_limit - total_rows
584:             end
585: 
586:             num_rows_yielded = 0
587:             limit(rows_per_fetch, offset).each do |row|
588:               num_rows_yielded += 1
589:               total_rows += 1 if total_limit
590:               yield row
591:             end
592: 
593:             offset += rows_per_fetch
594:           end
595:         end
596:       end
597: 
598:       self
599:     end

Returns a hash with key_column values as keys and value_column values as values. Similar to as_hash, but only selects the columns given. Like as_hash, it accepts an optional :hash parameter, into which entries will be merged.

  DB[:table].select_hash(:id, :name) # SELECT id, name FROM table
  # => {1=>'a', 2=>'b', ...}

You can also provide an array of column names for either the key_column, the value column, or both:

  DB[:table].select_hash([:id, :foo], [:name, :bar]) # SELECT id, foo, name, bar FROM table
  # {[1, 3]=>['a', 'c'], [2, 4]=>['b', 'd'], ...}

When using this method, you must be sure that each expression has an alias that Sequel can determine.

[Source]

     # File lib/sequel/dataset/actions.rb, line 617
617:     def select_hash(key_column, value_column, opts = OPTS)
618:       _select_hash(:as_hash, key_column, value_column, opts)
619:     end

Returns a hash with key_column values as keys and arrays of value_column values as values. Similar to to_hash_groups, but only selects the columns given. Like to_hash_groups, it accepts an optional :hash parameter, into which entries will be merged.

  DB[:table].select_hash_groups(:name, :id) # SELECT id, name FROM table
  # => {'a'=>[1, 4, ...], 'b'=>[2, ...], ...}

You can also provide an array of column names for either the key_column, the value column, or both:

  DB[:table].select_hash_groups([:first, :middle], [:last, :id]) # SELECT first, middle, last, id FROM table
  # {['a', 'b']=>[['c', 1], ['d', 2], ...], ...}

When using this method, you must be sure that each expression has an alias that Sequel can determine.

[Source]

     # File lib/sequel/dataset/actions.rb, line 636
636:     def select_hash_groups(key_column, value_column, opts = OPTS)
637:       _select_hash(:to_hash_groups, key_column, value_column, opts)
638:     end

Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined. Raises an Error if called with both an argument and a block.

  DB[:table].select_map(:id) # SELECT id FROM table
  # => [3, 5, 8, 1, ...]

  DB[:table].select_map{id * 2} # SELECT (id * 2) FROM table
  # => [6, 10, 16, 2, ...]

You can also provide an array of column names:

  DB[:table].select_map([:id, :name]) # SELECT id, name FROM table
  # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]

If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.

[Source]

     # File lib/sequel/dataset/actions.rb, line 659
659:     def select_map(column=nil, &block)
660:       _select_map(column, false, &block)
661:     end

The same as select_map, but in addition orders the array by the column.

  DB[:table].select_order_map(:id) # SELECT id FROM table ORDER BY id
  # => [1, 2, 3, 4, ...]

  DB[:table].select_order_map{id * 2} # SELECT (id * 2) FROM table ORDER BY (id * 2)
  # => [2, 4, 6, 8, ...]

You can also provide an array of column names:

  DB[:table].select_order_map([:id, :name]) # SELECT id, name FROM table ORDER BY id, name
  # => [[1, 'A'], [2, 'B'], [3, 'C'], ...]

If you provide an array of expressions, you must be sure that each entry in the array has an alias that Sequel can determine.

[Source]

     # File lib/sequel/dataset/actions.rb, line 678
678:     def select_order_map(column=nil, &block)
679:       _select_map(column, true, &block)
680:     end

Limits the dataset to one record, and returns the first record in the dataset, or nil if the dataset has no records. Users should probably use first instead of this method. Example:

  DB[:test].single_record # SELECT * FROM test LIMIT 1
  # => {:column_name=>'value'}

[Source]

     # File lib/sequel/dataset/actions.rb, line 688
688:     def single_record
689:       _single_record_ds.single_record!
690:     end

Returns the first record in the dataset, without limiting the dataset. Returns nil if the dataset has no records. Users should probably use first instead of this method. This should only be used if you know the dataset is already limited to a single record. This method may be desirable to use for performance reasons, as it does not clone the receiver. Example:

  DB[:test].single_record! # SELECT * FROM test
  # => {:column_name=>'value'}

[Source]

     # File lib/sequel/dataset/actions.rb, line 700
700:     def single_record!
701:       with_sql_first(select_sql)
702:     end

Returns the first value of the first record in the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Example:

  DB[:test].single_value # SELECT * FROM test LIMIT 1
  # => 'value'

[Source]

     # File lib/sequel/dataset/actions.rb, line 710
710:     def single_value
711:       single_value_ds.each do |r|
712:         r.each{|_, v| return v}
713:       end
714:       nil
715:     end

Returns the first value of the first record in the dataset, without limiting the dataset. Returns nil if the dataset is empty. Users should generally use get instead of this method. Should not be used on graphed datasets or datasets that have row_procs that don't return hashes. This method may be desirable to use for performance reasons, as it does not clone the receiver.

  DB[:test].single_value! # SELECT * FROM test
  # => 'value'

[Source]

     # File lib/sequel/dataset/actions.rb, line 725
725:     def single_value!
726:       with_sql_single_value(select_sql)
727:     end

Returns the sum for the given column/expression. Uses a virtual row block if no column is given.

  DB[:table].sum(:id) # SELECT sum(id) FROM table LIMIT 1
  # => 55
  DB[:table].sum{function(column)} # SELECT sum(function(column)) FROM table LIMIT 1
  # => 10

[Source]

     # File lib/sequel/dataset/actions.rb, line 736
736:     def sum(arg=Sequel.virtual_row(&Proc.new))
737:       _aggregate(:sum, arg)
738:     end

Alias of as_hash for backwards compatibility.

[Source]

     # File lib/sequel/dataset/actions.rb, line 792
792:     def to_hash(*a)
793:       as_hash(*a)
794:     end

Returns a hash with one column used as key and the values being an array of column values. If the value_column is not given or nil, uses the entire hash as the value.

  DB[:table].to_hash_groups(:name, :id) # SELECT * FROM table
  # {'Jim'=>[1, 4, 16, ...], 'Bob'=>[2], ...}

  DB[:table].to_hash_groups(:name) # SELECT * FROM table
  # {'Jim'=>[{:id=>1, :name=>'Jim'}, {:id=>4, :name=>'Jim'}, ...], 'Bob'=>[{:id=>2, :name=>'Bob'}], ...}

You can also provide an array of column names for either the key_column, the value column, or both:

  DB[:table].to_hash_groups([:first, :middle], [:last, :id]) # SELECT * FROM table
  # {['Jim', 'Bob']=>[['Smith', 1], ['Jackson', 4], ...], ...}

  DB[:table].to_hash_groups([:first, :middle]) # SELECT * FROM table
  # {['Jim', 'Bob']=>[{:id=>1, :first=>'Jim', :middle=>'Bob', :last=>'Smith'}, ...], ...}

Options:

:all : Use all instead of each to retrieve the objects
:hash : The object into which the values will be placed. If this is not given, an empty hash is used. This can be used to use a hash with a default value or default proc.

[Source]

     # File lib/sequel/dataset/actions.rb, line 820
820:     def to_hash_groups(key_column, value_column = nil, opts = OPTS)
821:       h = opts[:hash] || {}
822:       meth = opts[:all] ? :all : :each
823:       if value_column
824:         return naked.to_hash_groups(key_column, value_column, opts) if row_proc
825:         if value_column.is_a?(Array)
826:           if key_column.is_a?(Array)
827:             public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r.values_at(*value_column)}
828:           else
829:             public_send(meth){|r| (h[r[key_column]] ||= []) << r.values_at(*value_column)}
830:           end
831:         else
832:           if key_column.is_a?(Array)
833:             public_send(meth){|r| (h[r.values_at(*key_column)] ||= []) << r[value_column]}
834:           else
835:             public_send(meth){|r| (h[r[key_column]] ||= []) << r[value_column]}
836:           end
837:         end
838:       elsif key_column.is_a?(Array)
839:         public_send(meth){|r| (h[key_column.map{|k| r[k]}] ||= []) << r}
840:       else
841:         public_send(meth){|r| (h[r[key_column]] ||= []) << r}
842:       end
843:       h
844:     end

Truncates the dataset. Returns nil.

  DB[:table].truncate # TRUNCATE table
  # => nil

[Source]

     # File lib/sequel/dataset/actions.rb, line 850
850:     def truncate
851:       execute_ddl(truncate_sql)
852:     end

Updates values for the dataset. The returned value is the number of rows updated. values should be a hash where the keys are columns to set and values are the values to which to set the columns.

  DB[:table].update(x: nil) # UPDATE table SET x = NULL
  # => 10

  DB[:table].update(x: Sequel[:x]+1, y: 0) # UPDATE table SET x = (x + 1), y = 0
  # => 10

[Source]

     # File lib/sequel/dataset/actions.rb, line 863
863:     def update(values=OPTS, &block)
864:       sql = update_sql(values)
865:       if uses_returning?(:update)
866:         returning_fetch_rows(sql, &block)
867:       else
868:         execute_dui(sql)
869:       end
870:     end

Return an array of all rows matching the given filter condition, also yielding each row to the given block. Basically the same as where(cond).all(&block), except it can be optimized to not create an intermediate dataset.

  DB[:table].where_all(id: [1,2,3])
  # SELECT * FROM table WHERE (id IN (1, 2, 3))

[Source]

     # File lib/sequel/dataset/actions.rb, line 878
878:     def where_all(cond, &block)
879:       if loader = _where_loader
880:         loader.all(filter_expr(cond), &block)
881:       else
882:         where(cond).all(&block)
883:       end
884:     end

Iterate over all rows matching the given filter condition, yielding each row to the given block. Basically the same as where(cond).each(&block), except it can be optimized to not create an intermediate dataset.

  DB[:table].where_each(id: [1,2,3]){|row| p row}
  # SELECT * FROM table WHERE (id IN (1, 2, 3))

[Source]

     # File lib/sequel/dataset/actions.rb, line 892
892:     def where_each(cond, &block)
893:       if loader = _where_loader
894:         loader.each(filter_expr(cond), &block)
895:       else
896:         where(cond).each(&block)
897:       end
898:     end

Filter the dataset using the given filter condition, then return a single value. This assumes that the dataset has already been set up to limit the selection to a single column. Basically the same as where(cond).single_value, except it can be optimized to not create an intermediate dataset.

  DB[:table].select(:name).where_single_value(id: 1)
  # SELECT name FROM table WHERE (id = 1) LIMIT 1

[Source]

     # File lib/sequel/dataset/actions.rb, line 907
907:     def where_single_value(cond)
908:       if loader = cached_placeholder_literalizer(:_where_single_value_loader) do |pl|
909:           single_value_ds.where(pl.arg)
910:         end
911: 
912:         loader.get(filter_expr(cond))
913:       else
914:         where(cond).single_value
915:       end
916:     end

Run the given SQL and return an array of all rows. If a block is given, each row is yielded to the block after all rows are loaded. See with_sql_each.

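For example:

  DB[:table].with_sql_all("SELECT * FROM table WHERE id < 10")
  # => [{:id=>1, ...}, {:id=>2, ...}, ...]
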
[Source]

     # File lib/sequel/dataset/actions.rb, line 920
920:     def with_sql_all(sql, &block)
921:       _all(block){|a| with_sql_each(sql){|r| a << r}}
922:     end

Execute the given SQL and return the number of rows deleted. This exists solely as an optimization, replacing with_sql(sql).delete. It's significantly faster as it does not require cloning the current dataset.

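For example:

  DB[:table].with_sql_delete("DELETE FROM table WHERE id > 100")
  # => number of rows deleted
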
[Source]

     # File lib/sequel/dataset/actions.rb, line 927
927:     def with_sql_delete(sql)
928:       execute_dui(sql)
929:     end

Run the given SQL and yield each returned row to the block.

[Source]

     # File lib/sequel/dataset/actions.rb, line 933
933:     def with_sql_each(sql)
934:       if rp = row_proc
935:         _with_sql_dataset.fetch_rows(sql){|r| yield rp.call(r)}
936:       else
937:         _with_sql_dataset.fetch_rows(sql){|r| yield r}
938:       end
939:       self
940:     end

Run the given SQL and return the first row, or nil if no rows were returned. See with_sql_each.

[Source]

     # File lib/sequel/dataset/actions.rb, line 944
944:     def with_sql_first(sql)
945:       with_sql_each(sql){|r| return r}
946:       nil
947:     end

Execute the given SQL and (on most databases) return the primary key of the inserted row.

[Source]

     # File lib/sequel/dataset/actions.rb, line 960
960:     def with_sql_insert(sql)
961:       execute_insert(sql)
962:     end

Run the given SQL and return the first value in the first row, or nil if no rows were returned. For this to make sense, the SQL given should select only a single value. See with_sql_each.

[Source]

     # File lib/sequel/dataset/actions.rb, line 952
952:     def with_sql_single_value(sql)
953:       if r = with_sql_first(sql)
954:         r.each{|_, v| return v}
955:       end
956:     end
with_sql_update(sql)

Alias for with_sql_delete

Protected Instance methods

Internals of import. If primary key values are requested, use separate insert commands for each row. Otherwise, call multi_insert_sql and execute each statement it gives separately.

[Source]

     # File lib/sequel/dataset/actions.rb, line 969
969:     def _import(columns, values, opts)
970:       trans_opts = Hash[opts].merge!(:server=>@opts[:server])
971:       if opts[:return] == :primary_key
972:         @db.transaction(trans_opts){values.map{|v| insert(columns, v)}}
973:       else
974:         stmts = multi_insert_sql(columns, values)
975:         @db.transaction(trans_opts){stmts.each{|st| execute_dui(st)}}
976:       end
977:     end

Return an array of arrays of values given by the symbols in ret_cols.

[Source]

     # File lib/sequel/dataset/actions.rb, line 980
980:     def _select_map_multiple(ret_cols)
981:       map{|r| r.values_at(*ret_cols)}
982:     end

Returns an array of the first value in each row.

[Source]

     # File lib/sequel/dataset/actions.rb, line 985
985:     def _select_map_single
986:       k = nil
987:       map{|r| r[k||=r.keys.first]}
988:     end

A dataset for returning single values from the current dataset.

[Source]

     # File lib/sequel/dataset/actions.rb, line 991
991:     def single_value_ds
992:       clone(:limit=>1).ungraphed.naked
993:     end

8 - Methods related to prepared statements or bound variables

On some adapters, these use native prepared statements and bound variables, on others support is emulated. For details, see the "Prepared Statements/Bound Variables" guide.

Constants

PREPARED_ARG_PLACEHOLDER = LiteralString.new('?').freeze
DEFAULT_PREPARED_STATEMENT_MODULE_METHODS = %w'execute execute_dui execute_insert'.freeze.each(&:freeze)
PREPARED_STATEMENT_MODULE_CODE = { :bind => "opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze, :prepare => "sql = prepared_statement_name".freeze, :prepare_bind => "sql = prepared_statement_name; opts = Hash[opts]; opts[:arguments] = bind_arguments".freeze }.freeze

Public Instance methods

Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.

  DB[:table].where(id: :$id).bind(id: 1).call(:first)
  # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
  # => {:id=>1}

[Source]

     # File lib/sequel/dataset/prepared_statements.rb, line 275
275:     def bind(bind_vars=OPTS)
276:       bind_vars = if bv = @opts[:bind_vars]
277:         Hash[bv].merge!(bind_vars).freeze
278:       else
279:         if bind_vars.frozen?
280:           bind_vars
281:         else
282:           Hash[bind_vars]
283:         end
284:       end
285: 
286:       clone(:bind_vars=>bind_vars)
287:     end

For the given type (:select, :first, :insert, :insert_select, :update, or :delete), run the sql with the bind variables specified in the hash. values is a hash passed to insert or update (if one of those types is used), which may contain placeholders.

  DB[:table].where(id: :$id).call(:first, id: 1)
  # SELECT * FROM table WHERE id = ? LIMIT 1 -- (1)
  # => {:id=>1}

[Source]

     # File lib/sequel/dataset/prepared_statements.rb, line 296
296:     def call(type, bind_variables=OPTS, *values, &block)
297:       to_prepared_statement(type, values, :extend=>bound_variable_modules).call(bind_variables, &block)
298:     end

Prepare an SQL statement for later execution. Takes a type similar to call, and the name symbol of the prepared statement.

This returns a clone of the dataset extended with PreparedStatementMethods, which you can call with the hash of bind variables to use. The prepared statement is also stored in the associated Database, where it can be called by name. The following usage is identical:

  ps = DB[:table].where(name: :$name).prepare(:first, :select_by_name)

  ps.call(name: 'Blah')
  # SELECT * FROM table WHERE name = ? -- ('Blah')
  # => {:id=>1, :name=>'Blah'}

  DB.call(:select_by_name, name: 'Blah') # Same thing

[Source]

     # File lib/sequel/dataset/prepared_statements.rb, line 316
316:     def prepare(type, name, *values)
317:       ps = to_prepared_statement(type, values, :name=>name, :extend=>prepared_statement_modules, :no_delayed_evaluations=>true)
318:       ps.prepared_sql
319:       db.set_prepared_statement(name, ps)
320:       ps
321:     end

Protected Instance methods

Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the prepared statement type and the values used for modification.

[Source]

     # File lib/sequel/dataset/prepared_statements.rb, line 327
327:     def to_prepared_statement(type, values=nil, opts=OPTS)
328:       mods = opts[:extend] || []
329:       mods += [PreparedStatementMethods]
330: 
331:       bind.
332:         clone(:prepared_statement_name=>opts[:name], :prepared_type=>type, :prepared_modify_values=>values, :orig_dataset=>self, :no_cache_sql=>true, :prepared_args=>@opts[:prepared_args]||[], :no_delayed_evaluations=>opts[:no_delayed_evaluations]).
333:         with_extend(*mods)
334:     end

3 - User Methods relating to SQL Creation

These are methods you can call to see what SQL will be generated by the dataset.

Public Instance methods

Returns an EXISTS clause for the dataset as an SQL::PlaceholderLiteralString.

  DB.select(1).where(DB[:items].exists)
  # SELECT 1 WHERE (EXISTS (SELECT * FROM items))

[Source]

    # File lib/sequel/dataset/sql.rb, line 14
14:     def exists
15:       SQL::PlaceholderLiteralString.new(EXISTS, [self], true)
16:     end

Returns an INSERT SQL query string. See insert.

  DB[:items].insert_sql(a: 1)
  # => "INSERT INTO items (a) VALUES (1)"

[Source]

    # File lib/sequel/dataset/sql.rb, line 22
22:     def insert_sql(*values)
23:       return static_sql(@opts[:sql]) if @opts[:sql]
24: 
25:       check_modification_allowed!
26: 
27:       columns = []
28: 
29:       case values.size
30:       when 0
31:         return insert_sql(OPTS)
32:       when 1
33:         case vals = values[0]
34:         when Hash
35:           values = []
36:           vals.each do |k,v| 
37:             columns << k
38:             values << v
39:           end
40:         when Dataset, Array, LiteralString
41:           values = vals
42:         end
43:       when 2
44:         if (v0 = values[0]).is_a?(Array) && ((v1 = values[1]).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString))
45:           columns, values = v0, v1
46:           raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length
47:         end
48:       end
49: 
50:       if values.is_a?(Array) && values.empty? && !insert_supports_empty_values? 
51:         columns, values = insert_empty_columns_values
52:       elsif values.is_a?(Dataset) && hoist_cte?(values) && supports_cte?(:insert)
53:         ds, values = hoist_cte(values)
54:         return ds.clone(:columns=>columns, :values=>values).send(:_insert_sql)
55:       end
56:       clone(:columns=>columns, :values=>values).send(:_insert_sql)
57:     end

Append a literal representation of a value to the given SQL string.

If an unsupported object is given, an Error is raised.
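
A hedged sketch of direct use; the exact literalization depends on the value's class and the database (output shown for the default SQL dialect):

  sql = String.new
  DB[:items].literal_append(sql, [1, nil, "a'b"])
  sql  # => "(1, NULL, 'a''b')"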

[Source]

     # File lib/sequel/dataset/sql.rb, line 62
 62:     def literal_append(sql, v)
 63:       case v
 64:       when Symbol
 65:         if skip_symbol_cache?
 66:           literal_symbol_append(sql, v)
 67:         else 
 68:           unless l = db.literal_symbol(v)
 69:             l = String.new
 70:             literal_symbol_append(l, v)
 71:             db.literal_symbol_set(v, l)
 72:           end
 73:           sql << l
 74:         end
 75:       when String
 76:         case v
 77:         when LiteralString
 78:           sql << v
 79:         when SQL::Blob
 80:           literal_blob_append(sql, v)
 81:         else
 82:           literal_string_append(sql, v)
 83:         end
 84:       when Integer
 85:         sql << literal_integer(v)
 86:       when Hash
 87:         literal_hash_append(sql, v)
 88:       when SQL::Expression
 89:         literal_expression_append(sql, v)
 90:       when Float
 91:         sql << literal_float(v)
 92:       when BigDecimal
 93:         sql << literal_big_decimal(v)
 94:       when NilClass
 95:         sql << literal_nil
 96:       when TrueClass
 97:         sql << literal_true
 98:       when FalseClass
 99:         sql << literal_false
100:       when Array
101:         literal_array_append(sql, v)
102:       when Time
103:         v.is_a?(SQLTime) ? literal_sqltime_append(sql, v) : literal_time_append(sql, v)
104:       when DateTime
105:         literal_datetime_append(sql, v)
106:       when Date
107:         sql << literal_date(v)
108:       when Dataset
109:         literal_dataset_append(sql, v)
110:       else
111:         literal_other_append(sql, v)
112:       end
113:     end

Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects an array of keys and an array of value arrays.
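
A hedged example; the exact statements depend on the adapter's multi_insert_sql_strategy (single multi-row INSERT, UNION-based INSERT, or one INSERT per row):

  DB[:items].multi_insert_sql([:a, :b], [[1, 2], [3, 4]])
  # One INSERT per row (no multi-row strategy):
  # => ["INSERT INTO items (a, b) VALUES (1, 2)",
  #     "INSERT INTO items (a, b) VALUES (3, 4)"]
  # With the :values strategy, a single statement is returned:
  # => ["INSERT INTO items (a, b) VALUES (1, 2), (3, 4)"]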

[Source]

     # File lib/sequel/dataset/sql.rb, line 118
118:     def multi_insert_sql(columns, values)
119:       case multi_insert_sql_strategy
120:       when :values
121:         sql = LiteralString.new('VALUES ')
122:         expression_list_append(sql, values.map{|r| Array(r)})
123:         [insert_sql(columns, sql)]
124:       when :union
125:         c = false
126:         sql = LiteralString.new
127:         u = ' UNION ALL SELECT '
128:         f = empty_from_sql
129:         values.each do |v|
130:           if c
131:             sql << u
132:           else
133:             sql << 'SELECT '
134:             c = true
135:           end
136:           expression_list_append(sql, v)
137:           sql << f if f
138:         end
139:         [insert_sql(columns, sql)]
140:       else
141:         values.map{|r| insert_sql(columns, r)}
142:       end
143:     end

Same as select_sql, not aliased directly to make subclassing simpler.
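
For example:

  DB[:items].where(a: 1).sql
  # => "SELECT * FROM items WHERE (a = 1)"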

[Source]

     # File lib/sequel/dataset/sql.rb, line 146
146:     def sql
147:       select_sql
148:     end

Returns a TRUNCATE SQL query string. See truncate.

  DB[:items].truncate_sql # => 'TRUNCATE items'

[Source]

     # File lib/sequel/dataset/sql.rb, line 153
153:     def truncate_sql
154:       if opts[:sql]
155:         static_sql(opts[:sql])
156:       else
157:         check_truncation_allowed!
158:         check_not_limited!(:truncate)
159:         raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] || opts[:having]
160:         t = String.new
161:         source_list_append(t, opts[:from])
162:         _truncate_sql(t)
163:       end
164:     end

Formats an UPDATE statement using the given values. See update.

  DB[:items].update_sql(price: 100, category: 'software')
  # => "UPDATE items SET price = 100, category = 'software'

Raises an Error if the dataset is grouped or includes more than one table.

[Source]

     # File lib/sequel/dataset/sql.rb, line 173
173:     def update_sql(values = OPTS)
174:       return static_sql(opts[:sql]) if opts[:sql]
175:       check_modification_allowed!
176:       check_not_limited!(:update)
177: 
178:       case values
179:       when LiteralString
180:         # nothing
181:       when String
182:         raise Error, "plain string passed to Dataset#update is not supported, use Sequel.lit to use a literal string"
183:       end
184: 
185:       clone(:values=>values).send(:_update_sql)
186:     end

9 - Internal Methods relating to SQL Creation

These methods, while public, are not designed to be used directly by the end user.

Classes and Modules

Class Sequel::Dataset::PlaceholderLiteralizer

Constants

WILDCARD = LiteralString.new('*').freeze
COUNT_OF_ALL_AS_COUNT = SQL::Function.new(:count, WILDCARD).as(:count)
DEFAULT = LiteralString.new('DEFAULT').freeze
EXISTS = ['EXISTS '.freeze].freeze
BITWISE_METHOD_MAP = {:& =>:BITAND, :| => :BITOR, :^ => :BITXOR}.freeze
COUNT_FROM_SELF_OPTS = [:distinct, :group, :sql, :limit, :offset, :compounds].freeze
IS_LITERALS = {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze
QUALIFY_KEYS = [:select, :where, :having, :order, :group].freeze
IS_OPERATORS = ::Sequel::SQL::ComplexExpression::IS_OPERATORS
LIKE_OPERATORS = ::Sequel::SQL::ComplexExpression::LIKE_OPERATORS
N_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS
TWO_ARITY_OPERATORS = ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS
REGEXP_OPERATORS = ::Sequel::SQL::ComplexExpression::REGEXP_OPERATORS

Public Class methods

Given a type (e.g. select) and an array of clauses, return an array of methods to call to build the SQL string.
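
For example (the clause list is illustrative):

  Sequel::Dataset.clause_methods(:select, %w'select columns from where')
  # => [:select_select_sql, :select_columns_sql, :select_from_sql, :select_where_sql]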

[Source]

     # File lib/sequel/dataset/sql.rb, line 195
195:     def self.clause_methods(type, clauses)
196:       clauses.map{|clause| :"#{type}_#{clause}_sql"}.freeze
197:     end

Define a dataset literalization method for the given type in the given module, using the given clauses.

Arguments:

mod :Module in which to define method
type :Type of SQL literalization method to create, either :select, :insert, :update, or :delete
clauses :array of clauses that make up the SQL query for the type. This can either be a single array of symbols/strings, or it can be an array of pairs, with the first element in each pair being an if/elsif/else code fragment, and the second element in each pair being an array of symbol/strings for the appropriate branch.
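
A hedged sketch of usage (the target module and clause list are illustrative; this mirrors how a default DELETE SQL method could be defined):

  Sequel::Dataset.def_sql_method(Sequel::Dataset, :delete, %w'delete from where')
  # Defines Dataset#delete_sql, which builds the statement by calling
  # delete_delete_sql, delete_from_sql, and delete_where_sql in order.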

[Source]

     # File lib/sequel/dataset/sql.rb, line 209
209:     def self.def_sql_method(mod, type, clauses)
210:       priv = type == :update || type == :insert
211:       cacheable = type == :select || type == :delete
212: 
213:       lines = []
214:       lines << 'private' if priv
215:       lines << "def #{'_' if priv}#{type}_sql"
216:       lines << 'if sql = opts[:sql]; return static_sql(sql) end' unless priv
217:       lines << "if sql = cache_get(:_#{type}_sql); return sql end" if cacheable
218:       lines << 'check_modification_allowed!' << 'check_not_limited!(:delete)' if type == :delete
219:       lines << 'sql = @opts[:append_sql] || sql_string_origin'
220: 
221:       if clauses.all?{|c| c.is_a?(Array)}
222:         clauses.each do |i, cs|
223:           lines << i
224:           lines.concat(clause_methods(type, cs).map{|x| "#{x}(sql)"}) 
225:         end 
226:         lines << 'end'
227:       else
228:         lines.concat(clause_methods(type, clauses).map{|x| "#{x}(sql)"})
229:       end
230: 
231:       lines << "cache_set(:_#{type}_sql, sql) if cache_sql?" if cacheable
232:       lines << 'sql'
233:       lines << 'end'
234: 
235:       mod.class_eval lines.join("\n"), __FILE__, __LINE__
236:     end

Public Instance methods

Append literalization of aliased expression to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 271
271:     def aliased_expression_sql_append(sql, ae)
272:       literal_append(sql, ae.expression)
273:       as_sql_append(sql, ae.alias, ae.columns)
274:     end

Append literalization of array to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 277
277:     def array_sql_append(sql, a)
278:       if a.empty?
279:         sql << '(NULL)'
280:       else
281:         sql << '('
282:         expression_list_append(sql, a)
283:         sql << ')'
284:       end
285:     end

Append literalization of boolean constant to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 288
288:     def boolean_constant_sql_append(sql, constant)
289:       if (constant == true || constant == false) && !supports_where_true?
290:         sql << (constant == true ? '(1 = 1)' : '(1 = 0)')
291:       else
292:         literal_append(sql, constant)
293:       end
294:     end

Append literalization of case expression to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 297
297:     def case_expression_sql_append(sql, ce)
298:       sql << '(CASE'
299:       if ce.expression?
300:         sql << ' '
301:         literal_append(sql, ce.expression)
302:       end
303:       w = " WHEN "
304:       t = " THEN "
305:       ce.conditions.each do |c,r|
306:         sql << w
307:         literal_append(sql, c)
308:         sql << t
309:         literal_append(sql, r)
310:       end
311:       sql << " ELSE "
312:       literal_append(sql, ce.default)
313:       sql << " END)"
314:     end

Append literalization of cast expression to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 317
317:     def cast_sql_append(sql, expr, type)
318:       sql << 'CAST('
319:       literal_append(sql, expr)
320:       sql << ' AS ' << db.cast_type_literal(type).to_s
321:       sql << ')'
322:     end

Append literalization of column all selection to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 325
325:     def column_all_sql_append(sql, ca)
326:       qualified_identifier_sql_append(sql, ca.table, WILDCARD)
327:     end

Append literalization of complex expression to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 330
330:     def complex_expression_sql_append(sql, op, args)
331:       case op
332:       when *IS_OPERATORS
333:         r = args[1]
334:         if r.nil? || supports_is_true?
335:           raise(InvalidOperation, 'Invalid argument used for IS operator') unless val = IS_LITERALS[r]
336:           sql << '('
337:           literal_append(sql, args[0])
338:           sql << ' ' << op.to_s << ' '
339:           sql << val << ')'
340:         elsif op == :IS
341:           complex_expression_sql_append(sql, :"=", args)
342:         else
343:           complex_expression_sql_append(sql, :OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args[0], nil)])
344:         end
345:       when :IN, :"NOT IN"
346:         cols = args[0]
347:         vals = args[1]
348:         col_array = true if cols.is_a?(Array)
349:         if vals.is_a?(Array)
350:           val_array = true
351:           empty_val_array = vals == []
352:         end
353:         if empty_val_array
354:           literal_append(sql, empty_array_value(op, cols))
355:         elsif col_array
356:           if !supports_multiple_column_in?
357:             if val_array
358:               expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})})
359:               literal_append(sql, op == :IN ? expr : ~expr)
360:             else
361:               old_vals = vals
362:               vals = vals.naked if vals.is_a?(Sequel::Dataset)
363:               vals = vals.to_a
364:               val_cols = old_vals.columns
365:               complex_expression_sql_append(sql, op, [cols, vals.map!{|x| x.values_at(*val_cols)}])
366:             end
367:           else
368:             # If the columns and values are both arrays, use array_sql instead of
369:             # literal so that if values is an array of two element arrays, it
370:             # will be treated as a value list instead of a condition specifier.
371:             sql << '('
372:             literal_append(sql, cols)
373:             sql << ' ' << op.to_s << ' '
374:             if val_array
375:               array_sql_append(sql, vals)
376:             else
377:               literal_append(sql, vals)
378:             end
379:             sql << ')'
380:           end
381:         else
382:           sql << '('
383:           literal_append(sql, cols)
384:           sql << ' ' << op.to_s << ' '
385:           literal_append(sql, vals)
386:           sql << ')'
387:         end
388:       when :LIKE, :"NOT LIKE"
389:         sql << '('
390:         literal_append(sql, args[0])
391:         sql << ' ' << op.to_s << ' '
392:         literal_append(sql, args[1])
393:         if requires_like_escape?
394:           sql << " ESCAPE "
395:           literal_append(sql, "\\")
396:         end
397:         sql << ')'
398:       when :ILIKE, :"NOT ILIKE"
399:         complex_expression_sql_append(sql, (op == :ILIKE ? :LIKE : :"NOT LIKE"), args.map{|v| Sequel.function(:UPPER, v)})
400:       when :**
401:         function_sql_append(sql, Sequel.function(:power, *args))
402:       when *TWO_ARITY_OPERATORS
403:         if REGEXP_OPERATORS.include?(op) && !supports_regexp?
404:           raise InvalidOperation, "Pattern matching via regular expressions is not supported on #{db.database_type}"
405:         end
406:         sql << '('
407:         literal_append(sql, args[0])
408:         sql << ' ' << op.to_s << ' '
409:         literal_append(sql, args[1])
410:         sql << ')'
411:       when *N_ARITY_OPERATORS
412:         sql << '('
413:         c = false
414:         op_str = " #{op} "
415:         args.each do |a|
416:           sql << op_str if c
417:           literal_append(sql, a)
418:           c ||= true
419:         end
420:         sql << ')'
421:       when :NOT
422:         sql << 'NOT '
423:         literal_append(sql, args[0])
424:       when :NOOP
425:         literal_append(sql, args[0])
426:       when :"B~"
427:         sql << '~'
428:         literal_append(sql, args[0])
429:       when :extract
430:         sql << 'extract(' << args[0].to_s << ' FROM '
431:         literal_append(sql, args[1])
432:         sql << ')'
433:       else
434:         raise(InvalidOperation, "invalid operator #{op}")
435:       end
436:     end

Append literalization of constant to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 439
439:     def constant_sql_append(sql, constant)
440:       sql << constant.to_s
441:     end

Append literalization of delayed evaluation to SQL string, causing the delayed evaluation proc to be evaluated.

[Source]

     # File lib/sequel/dataset/sql.rb, line 445
445:     def delayed_evaluation_sql_append(sql, delay)
446:       # Delayed evaluations are used specifically so the SQL
447:       # can differ in subsequent calls, so we definitely don't
448:       # want to cache the sql in this case.
449:       disable_sql_caching!
450: 
451:       if recorder = @opts[:placeholder_literalizer]
452:         recorder.use(sql, lambda{delay.call(self)}, nil)
453:       else
454:         literal_append(sql, delay.call(self))
455:       end
456:     end

Append literalization of function call to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 459
459:     def function_sql_append(sql, f)
460:       name = f.name
461:       opts = f.opts
462: 
463:       if opts[:emulate]
464:         if emulate_function?(name)
465:           emulate_function_sql_append(sql, f)
466:           return
467:         end
468: 
469:         name = native_function_name(name) 
470:       end
471: 
472:       sql << 'LATERAL ' if opts[:lateral]
473: 
474:       case name
475:       when SQL::Identifier
476:         if supports_quoted_function_names? && opts[:quoted]
477:           literal_append(sql, name)
478:         else
479:           sql << name.value.to_s
480:         end
481:       when SQL::QualifiedIdentifier
482:         if supports_quoted_function_names? && opts[:quoted] != false
483:           literal_append(sql, name)
484:         else
485:           sql << split_qualifiers(name).join('.')
486:         end
487:       else
488:         if supports_quoted_function_names? && opts[:quoted]
489:           quote_identifier_append(sql, name)
490:         else
491:           sql << name.to_s
492:         end
493:       end
494: 
495:       sql << '('
496:       if opts[:*]
497:         sql << '*'
498:       else
499:         sql << "DISTINCT " if opts[:distinct]
500:         expression_list_append(sql, f.args)
501:         if order = opts[:order]
502:           sql << " ORDER BY "
503:           expression_list_append(sql, order)
504:         end
505:       end
506:       sql << ')'
507: 
508:       if group = opts[:within_group]
509:         sql << " WITHIN GROUP (ORDER BY "
510:         expression_list_append(sql, group)
511:         sql << ')'
512:       end
513: 
514:       if filter = opts[:filter]
515:         sql << " FILTER (WHERE "
516:         literal_append(sql, filter_expr(filter, &opts[:filter_block]))
517:         sql << ')'
518:       end
519: 
520:       if window = opts[:over]
521:         sql << ' OVER '
522:         window_sql_append(sql, window.opts)
523:       end
524: 
525:       if opts[:with_ordinality]
526:         sql << " WITH ORDINALITY"
527:       end
528:     end

Append literalization of JOIN clause without ON or USING to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 531
531:     def join_clause_sql_append(sql, jc)
532:       table = jc.table
533:       table_alias = jc.table_alias
534:       table_alias = nil if table == table_alias && !jc.column_aliases
535:       sql << ' ' << join_type_sql(jc.join_type) << ' '
536:       identifier_append(sql, table)
537:       as_sql_append(sql, table_alias, jc.column_aliases) if table_alias
538:     end

Append literalization of JOIN ON clause to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 541
541:     def join_on_clause_sql_append(sql, jc)
542:       join_clause_sql_append(sql, jc)
543:       sql << ' ON '
544:       literal_append(sql, filter_expr(jc.on))
545:     end

Append literalization of JOIN USING clause to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 548
548:     def join_using_clause_sql_append(sql, jc)
549:       join_clause_sql_append(sql, jc)
550:       sql << ' USING ('
551:       column_list_append(sql, jc.using)
552:       sql << ')'
553:     end

Append literalization of negative boolean constant to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 556
556:     def negative_boolean_constant_sql_append(sql, constant)
557:       sql << 'NOT '
558:       boolean_constant_sql_append(sql, constant)
559:     end

Append literalization of ordered expression to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 562
562:     def ordered_expression_sql_append(sql, oe)
563:       literal_append(sql, oe.expression)
564:       sql << (oe.descending ? ' DESC' : ' ASC')
565:       case oe.nulls
566:       when :first
567:         sql << " NULLS FIRST"
568:       when :last
569:         sql << " NULLS LAST"
570:       end
571:     end

Append literalization of placeholder literal string to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 574
574:     def placeholder_literal_string_sql_append(sql, pls)
575:       args = pls.args
576:       str = pls.str
577:       sql << '(' if pls.parens
578:       if args.is_a?(Hash)
579:         if args.empty?
580:           sql << str
581:         else
582:           re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/
583:           while true
584:             previous, q, str = str.partition(re)
585:             sql << previous
586:             literal_append(sql, args[($1||q[1..-1].to_s).to_sym]) unless q.empty?
587:             break if str.empty?
588:           end
589:         end
590:       elsif str.is_a?(Array)
591:         len = args.length
592:         str.each_with_index do |s, i|
593:           sql << s
594:           literal_append(sql, args[i]) unless i == len
595:         end
596:         unless str.length == args.length || str.length == args.length + 1
597:           raise Error, "Mismatched number of placeholders (#{str.length}) and placeholder arguments (#{args.length}) when using placeholder array"
598:         end
599:       else
600:         i = -1
601:         match_len = args.length - 1
602:         while true
603:           previous, q, str = str.partition('?')
604:           sql << previous
605:           literal_append(sql, args.at(i+=1)) unless q.empty?
606:           if str.empty?
607:             unless i == match_len
608:               raise Error, "Mismatched number of placeholders (#{i+1}) and placeholder arguments (#{args.length}) when using placeholder string"
609:             end
610:             break
611:           end
612:         end
613:       end
614:       sql << ')' if pls.parens
615:     end

Append literalization of qualified identifier to SQL string. If 3 arguments are given, the 2nd should be the table/qualifier and the third should be column/qualified. If 2 arguments are given, the 2nd should be an SQL::QualifiedIdentifier.

[Source]

     # File lib/sequel/dataset/sql.rb, line 620
620:     def qualified_identifier_sql_append(sql, table, column=(c = table.column; table = table.table; c))
621:       identifier_append(sql, table)
622:       sql << '.'
623:       identifier_append(sql, column)
624:     end

Append literalization of unqualified identifier to SQL string. Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, appends the name as a string. If identifiers are being quoted, quotes the name with quoted_identifier_append.

[Source]

     # File lib/sequel/dataset/sql.rb, line 630
630:     def quote_identifier_append(sql, name)
631:       if name.is_a?(LiteralString)
632:         sql << name
633:       else
634:         name = name.value if name.is_a?(SQL::Identifier)
635:         name = input_identifier(name)
636:         if quote_identifiers?
637:           quoted_identifier_append(sql, name)
638:         else
639:           sql << name
640:         end
641:       end
642:     end

Append literalization of identifier or unqualified identifier to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 645
645:     def quote_schema_table_append(sql, table)
646:       schema, table = schema_and_table(table)
647:       if schema
648:         quote_identifier_append(sql, schema)
649:         sql << '.'
650:       end
651:       quote_identifier_append(sql, table)
652:     end

Append literalization of quoted identifier to SQL string. This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting that does not match the SQL standard, such as the backtick (used by MySQL and SQLite).
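
For example (identifier is illustrative; subclasses may quote differently):

  sql = String.new
  DB[:items].quoted_identifier_append(sql, :order)
  sql  # => "\"order\""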

[Source]

     # File lib/sequel/dataset/sql.rb, line 658
658:     def quoted_identifier_append(sql, name)
659:       sql << '"' << name.to_s.gsub('"', '""') << '"'
660:     end

Split the schema information from the table, returning two strings, one for the schema and one for the table. The returned schema may be nil, but the table will always have a string value.

Note that this function does not handle tables with more than one level of qualification (e.g. database.schema.table on Microsoft SQL Server).
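
For example:

  ds.schema_and_table(:s)              # => [nil, 's']
  ds.schema_and_table(Sequel[:t][:s])  # => ['t', 's']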

[Source]

     # File lib/sequel/dataset/sql.rb, line 669
669:     def schema_and_table(table_name, sch=nil)
670:       sch = sch.to_s if sch
671:       case table_name
672:       when Symbol
673:         s, t, _ = split_symbol(table_name)
674:         [s||sch, t]
675:       when SQL::QualifiedIdentifier
676:         [table_name.table.to_s, table_name.column.to_s]
677:       when SQL::Identifier
678:         [sch, table_name.value.to_s]
679:       when String
680:         [sch, table_name]
681:       else
682:         raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String'
683:       end
684:     end

Splits table_name into an array of strings.

  ds.split_qualifiers(:s) # ['s']
  ds.split_qualifiers(Sequel[:t][:s]) # ['t', 's']
  ds.split_qualifiers(Sequel[:d][:t][:s]) # ['d', 't', 's']
  ds.split_qualifiers(Sequel.qualify(Sequel[:h][:d], Sequel[:t][:s])) # ['h', 'd', 't', 's']

[Source]

     # File lib/sequel/dataset/sql.rb, line 692
692:     def split_qualifiers(table_name, *args)
693:       case table_name
694:       when SQL::QualifiedIdentifier
695:         split_qualifiers(table_name.table, nil) + split_qualifiers(table_name.column, nil)
696:       else
697:         sch, table = schema_and_table(table_name, *args)
698:         sch ? [sch, table] : [table]
699:       end
700:     end

Append literalization of subscripts (SQL array accesses) to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 703
703:     def subscript_sql_append(sql, s)
704:       literal_append(sql, s.expression)
705:       sql << '['
706:       sub = s.sub
707:       if sub.length == 1 && (range = sub.first).is_a?(Range)
708:         literal_append(sql, range.begin)
709:         sql << ':'
710:         e = range.end
711:         e -= 1 if range.exclude_end? && e.is_a?(Integer)
712:         literal_append(sql, e)
713:       else
714:         expression_list_append(sql, s.sub)
715:       end
716:       sql << ']'
717:     end

Append literalization of windows (for window functions) to SQL string.

[Source]

     # File lib/sequel/dataset/sql.rb, line 720
720:     def window_sql_append(sql, opts)
721:       raise(Error, 'This dataset does not support window functions') unless supports_window_functions?
722:       sql << '('
723:       window, part, order, frame = opts.values_at(:window, :partition, :order, :frame)
724:       space = false
725:       space_s = ' '
726:       if window
727:         literal_append(sql, window)
728:         space = true
729:       end
730:       if part
731:         sql << space_s if space
732:         sql << "PARTITION BY "
733:         expression_list_append(sql, Array(part))
734:         space = true
735:       end
736:       if order
737:         sql << space_s if space
738:         sql << "ORDER BY "
739:         expression_list_append(sql, Array(order))
740:         space = true
741:       end
742:       case frame
743:         when nil
744:           # nothing
745:         when :all
746:           sql << space_s if space
747:           sql << "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING"
748:         when :rows
749:           sql << space_s if space
750:           sql << "ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW"
751:         when String
752:           sql << space_s if space
753:           sql << frame
754:         else
755:           raise Error, "invalid window frame clause, should be :all, :rows, a string, or nil"
756:       end
757:       sql << ')'
758:     end

Protected Instance methods

Return a from_self dataset if an order or limit is specified, so it works as expected with UNION, EXCEPT, and INTERSECT clauses.

[Source]

     # File lib/sequel/dataset/sql.rb, line 764
764:     def compound_from_self
765:       (@opts[:sql] || @opts[:limit] || @opts[:order] || @opts[:offset]) ? from_self : self
766:     end

5 - Methods related to dataset graphing

Dataset graphing automatically creates unique aliases for columns in joined tables that overlap with already selected column aliases. All of these methods return modified copies of the receiver.

Public Instance methods

Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list (the equivalent of select_append when graphing). See set_graph_aliases.

  DB[:table].add_graph_aliases(some_alias: [:table, :column])
  # SELECT ..., table.column AS some_alias

[Source]

    # File lib/sequel/dataset/graph.rb, line 18
18:     def add_graph_aliases(graph_aliases)
19:       graph = opts[:graph]
20:       unless (graph && (ga = graph[:column_aliases]))
21:         raise Error, "cannot call add_graph_aliases on a dataset that has not been called with graph or set_graph_aliases"
22:       end
23:       columns, graph_aliases = graph_alias_columns(graph_aliases)
24:       select_append(*columns).clone(:graph => Hash[graph].merge!(:column_aliases=>Hash[ga].merge!(graph_aliases).freeze).freeze)
25:     end

Similar to Dataset#join_table, but uses unambiguous aliases for selected columns and keeps metadata about the aliases for use in other methods.

Arguments:

dataset :Can be a symbol (specifying a table), another dataset, or an SQL::Identifier, SQL::QualifiedIdentifier, or SQL::AliasedExpression.
join_conditions :Any condition(s) allowed by join_table.
block :A block that is passed to join_table.

Options:

:from_self_alias :The alias to use when the receiver is not a graphed dataset but it contains multiple FROM tables or a JOIN. In this case, the receiver is wrapped in a from_self before graphing, and this option determines the alias to use.
:implicit_qualifier :The qualifier of implicit conditions, see join_table.
:join_only :Only join the tables, do not change the selected columns.
:join_type :The type of join to use (passed to join_table). Defaults to :left_outer.
:qualify :The type of qualification to do, see join_table.
:select :An array of columns to select. When not used, selects all columns in the given dataset. When set to false, selects no columns and is like simply joining the tables, though graph keeps some metadata about the join that makes it important to use graph instead of join_table.
:table_alias :The alias to use for the table. If not specified, doesn't alias the table. You will get an error if the alias (or table) name is used more than once.
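
A hedged example (assumes both tables have id and name columns; joined columns whose names clash are aliased using the table_column pattern):

  DB[:albums].graph(:artists, id: :artist_id)
  # SELECT albums.id, albums.name, artists.id AS artists_id, artists.name AS artists_name
  # FROM albums LEFT OUTER JOIN artists ON (artists.id = albums.artist_id)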

[Source]

     # File lib/sequel/dataset/graph.rb, line 53
 53:     def graph(dataset, join_conditions = nil, options = OPTS, &block)
 54:       # Allow the use of a dataset or symbol as the first argument
 55:       # Find the table name/dataset based on the argument
 56:       table_alias = options[:table_alias]
 57:       table = dataset
 58:       create_dataset = true
 59: 
 60:       case dataset
 61:       when Symbol
 62:         # let alias be the same as the table name (sans any optional schema)
 63:         # unless alias explicitly given in the symbol using ___ notation and symbol splitting is enabled
 64:         table_alias ||= split_symbol(table).compact.last
 65:       when Dataset
 66:         if dataset.simple_select_all?
 67:           table = dataset.opts[:from].first
 68:           table_alias ||= table
 69:         else
 70:           table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1)
 71:         end
 72:         create_dataset = false
 73:       when SQL::Identifier
 74:         table_alias ||= table.value
 75:       when SQL::QualifiedIdentifier
 76:         table_alias ||= split_qualifiers(table).last
 77:       when SQL::AliasedExpression
 78:         return graph(table.expression, join_conditions, {:table_alias=>table.alias}.merge!(options), &block)
 79:       else
 80:         raise Error, "The dataset argument should be a symbol or dataset"
 81:       end
 82:       table_alias = table_alias.to_sym
 83: 
 84:       if create_dataset
 85:         dataset = db.from(table)
 86:       end
 87: 
 88:       # Raise Sequel::Error with explanation that the table alias has been used
 89:       raise_alias_error = lambda do
 90:         raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been used, please specify " \
 91:           "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") 
 92:       end
 93: 
 94:       # Only allow table aliases that haven't been used
 95:       raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias)
 96:       
 97:       table_alias_qualifier = qualifier_from_alias_symbol(table_alias, table)
 98:       implicit_qualifier = options[:implicit_qualifier]
 99:       ds = self
100: 
101:       # Use a from_self if this is already a joined table (or from_self specifically disabled for graphs)
102:       if (@opts[:graph_from_self] != false && !@opts[:graph] && joined_dataset?)
103:         from_selfed = true
104:         implicit_qualifier = options[:from_self_alias] || first_source
105:         ds = ds.from_self(:alias=>implicit_qualifier)
106:       end
107:       
108:       # Join the table early in order to avoid cloning the dataset twice
109:       ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias_qualifier, :implicit_qualifier=>implicit_qualifier, :qualify=>options[:qualify], &block)
110: 
111:       return ds if options[:join_only]
112: 
113:       opts = ds.opts
114: 
115:       # Whether to include the table in the result set
116:       add_table = options[:select] == false ? false : true
117: 
118:       if graph = opts[:graph]
119:         graph = graph.dup
120:         select = opts[:select].dup
121:         [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k] = graph[k].dup}
122:       else
123:         # Setup the initial graph data structure if it doesn't exist
124:         qualifier = ds.first_source_alias
125:         master = alias_symbol(qualifier)
126:         raise_alias_error.call if master == table_alias
127: 
128:         # Master hash storing all .graph related information
129:         graph = {}
130: 
131:         # Associates column aliases back to tables and columns
132:         column_aliases = graph[:column_aliases] = {}
133: 
134:         # Associates table alias (the master is never aliased)
135:         table_aliases = graph[:table_aliases] = {master=>self}
136: 
137:         # Keep track of the alias numbers used
138:         ca_num = graph[:column_alias_num] = Hash.new(0)
139: 
140:         # All columns in the master table are never
141:         # aliased, but are not included if set_graph_aliases
142:         # has been used.
143:         if (select = @opts[:select]) && !select.empty? && !(select.length == 1 && (select.first.is_a?(SQL::ColumnAll)))
144:           select = select.map do |sel|
145:             raise Error, "can't figure out alias to use for graphing for #{sel.inspect}" unless column = _hash_key_symbol(sel)
146:             column_aliases[column] = [master, column]
147:             if from_selfed
148:               # Initial dataset was wrapped in subselect, selected all
149:               # columns in the subselect, qualified by the subselect alias.
150:               Sequel.qualify(qualifier, Sequel.identifier(column))
151:             else
152:               # Initial dataset not wrapped in subselect, just make
153:               # sure columns are qualified in some way.
154:               qualified_expression(sel, qualifier)
155:             end
156:           end
157:         else
158:           select = columns.map do |column|
159:             column_aliases[column] = [master, column]
160:             SQL::QualifiedIdentifier.new(qualifier, column)
161:           end
162:         end
163:       end
164: 
165:       # Add the table alias to the list of aliases
166:       # Even if it isn't being used in the result set,
167:       # we add a key for it with a nil value so we can check if it
168:       # is used more than once
169:       table_aliases = graph[:table_aliases]
170:       table_aliases[table_alias] = add_table ? dataset : nil
171: 
172:       # Add the columns to the selection unless we are ignoring them
173:       if add_table
174:         column_aliases = graph[:column_aliases]
175:         ca_num = graph[:column_alias_num]
176:         # Which columns to add to the result set
177:         cols = options[:select] || dataset.columns
178:         # If the column hasn't been used yet, don't alias it.
179:         # If it has been used, try table_column.
180:         # If that has been used, try table_column_N 
181:         # using the next value of N that we know hasn't been
182:         # used
183:         cols.each do |column|
184:           col_alias, identifier = if column_aliases[column]
185:             column_alias = :"#{table_alias}_#{column}"
186:             if column_aliases[column_alias]
187:               column_alias_num = ca_num[column_alias]
188:               column_alias = :"#{column_alias}_#{column_alias_num}"
189:               ca_num[column_alias] += 1
190:             end
191:             [column_alias, SQL::AliasedExpression.new(SQL::QualifiedIdentifier.new(table_alias_qualifier, column), column_alias)]
192:           else
193:             ident = SQL::QualifiedIdentifier.new(table_alias_qualifier, column)
194:             [column, ident]
195:           end
196:           column_aliases[col_alias] = [table_alias, column].freeze
197:           select.push(identifier)
198:         end
199:       end
200:       [:column_aliases, :table_aliases, :column_alias_num].each{|k| graph[k].freeze}
201:       ds = ds.clone(:graph=>graph.freeze)
202:       ds.select(*select)
203:     end

This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of select for a graphed dataset, and must be used instead of select whenever graphing is used.

graph_aliases should be a hash with keys being symbols of column aliases, and values being either symbols or arrays with one to three elements. If the value is a symbol, it is assumed to be the same as a one element array containing that symbol. The first element of the array should be the table alias symbol. The second should be the actual column name symbol. If the array only has a single element the column name symbol will be assumed to be the same as the corresponding hash key. If the array has a third element, it is used as the value returned, instead of table_alias.column_name.

  DB[:artists].graph(:albums, artist_id: :id).
    set_graph_aliases(name: :artists,
                      album_name: [:albums, :name],
                      forty_two: [:albums, :fourtwo, 42]).first
  # SELECT artists.name, albums.name AS album_name, 42 AS forty_two ...

[Source]

     # File lib/sequel/dataset/graph.rb, line 228
228:     def set_graph_aliases(graph_aliases)
229:       columns, graph_aliases = graph_alias_columns(graph_aliases)
230:       if graph = opts[:graph]
231:         select(*columns).clone(:graph => Hash[graph].merge!(:column_aliases=>graph_aliases.freeze).freeze)
232:       else
233:         raise Error, "cannot call #set_graph_aliases on an ungraphed dataset"
234:       end
235:     end

Remove the splitting of results into subhashes, and all metadata related to the current graph (if any).

[Source]

     # File lib/sequel/dataset/graph.rb, line 239
239:     def ungraphed
240:       clone(:graph=>nil)
241:     end

1 - Methods that return modified datasets

These methods all return modified copies of the receiver.

Constants

EXTENSIONS = {}   Hash of extension name symbols to callable objects to load the extension into the Dataset object (usually by extending it with a module defined in the extension).
EMPTY_ARRAY = [].freeze
COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze   The dataset options that require the removal of cached columns if changed.
NON_SQL_OPTIONS = [:server, :graph, :row_proc, :quote_identifiers, :skip_symbol_cache].freeze   Which options don't affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table.
CONDITIONED_JOIN_TYPES = [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left].freeze   These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol, passing along the arguments and block from the method call.
UNCONDITIONED_JOIN_TYPES = [:natural, :natural_left, :natural_right, :natural_full, :cross].freeze   These symbols have _join methods created (e.g. natural_join). They accept a table argument and options hash which is passed to join_table, and they raise an error if called with a block.
JOIN_METHODS = ((CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]).freeze   All methods that return modified datasets with a joined table added.
QUERY_METHODS = ((<<-METHS).split.map(&:to_sym) + JOIN_METHODS).freeze   Methods that return modified datasets. The METHS heredoc contains: add_graph_aliases distinct except exclude exclude_having filter for_update from from_self graph grep group group_and_count group_append group_by having intersect invert limit lock_style naked offset or order order_append order_by order_more order_prepend qualify reverse reverse_order select select_all select_append select_group select_more server set_graph_aliases unfiltered ungraphed ungrouped union unlimited unordered where with with_recursive with_sql
SIMPLE_SELECT_ALL_ALLOWED_FROM = [Symbol, SQL::Identifier, SQL::QualifiedIdentifier].freeze   From types allowed to be considered a simple_select_all

External Aliases

clone -> _clone
  Save original clone implementation, as some other methods need to call it internally.

Public Class methods

Register an extension callback for Dataset objects. ext should be the extension name symbol, and mod should either be a Module that the dataset is extended with, or a callable object called with the dataset object. If mod is not provided, a block can be provided and is treated as the mod object.

If mod is a module, this also registers a Database extension that will extend all of the database's datasets.
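
A hedged sketch (the extension name and module are hypothetical; registration is normally done inside the extension file that Dataset#extension requires by name):

  module MyDatasetMethods
    def tagged
      where(Sequel.~(tag: nil))
    end
  end

  Sequel::Dataset.register_extension(:my_dataset_methods, MyDatasetMethods)

Once the corresponding extension file is loadable, DB[:items].extension(:my_dataset_methods) would return a dataset extended with the module.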

[Source]

    # File lib/sequel/dataset/query.rb, line 56
56:     def self.register_extension(ext, mod=nil, &block)
57:       if mod
58:         raise(Error, "cannot provide both mod and block to Dataset.register_extension") if block
59:         if mod.is_a?(Module)
60:           block = proc{|ds| ds.extend(mod)}
61:           Sequel::Database.register_extension(ext){|db| db.extend_datasets(mod)}
62:         else
63:           block = mod
64:         end
65:       end
66:       Sequel.synchronize{EXTENSIONS[ext] = block}
67:     end

Public Instance methods

Returns a new clone of the dataset with the given options merged. If the options changed include options in COLUMN_CHANGE_OPTS, the cached columns are deleted. This method should generally not be called directly by user code.

[Source]

    # File lib/sequel/dataset/query.rb, line 85
85:       def clone(opts = (return self; nil))
86:         c = super(:freeze=>false)
87:         c.opts.merge!(opts)
88:         unless opts.each_key{|o| break if COLUMN_CHANGE_OPTS.include?(o)}
89:           c.clear_columns_cache
90:         end
91:         c.freeze
92:       end

Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. If a block is given, it is treated as a virtual row block, similar to where. Raises an error if arguments are given and DISTINCT ON is not supported.

 DB[:items].distinct # SQL: SELECT DISTINCT * FROM items
 DB[:items].order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
 DB[:items].order(:id).distinct{func(:id)} # SQL: SELECT DISTINCT ON (func(id)) * FROM items ORDER BY id

There is support for emulating the DISTINCT ON support in MySQL, but it does not respect the ORDER of the dataset, and also doesn't work in many cases if the ONLY_FULL_GROUP_BY sql_mode is used, which is the default on MySQL 5.7.5+.

[Source]

     # File lib/sequel/dataset/query.rb, line 122
122:     def distinct(*args, &block)
123:       virtual_row_columns(args, block)
124:       if args.empty?
125:         cached_dataset(:_distinct_ds){clone(:distinct => EMPTY_ARRAY)}
126:       else
127:         raise(InvalidOperation, "DISTINCT ON not supported") unless supports_distinct_on?
128:         clone(:distinct => args.freeze)
129:       end
130:     end

Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options:

:alias :Use the given value as the from_self alias
:all :Set to true to use EXCEPT ALL instead of EXCEPT, so duplicate rows can occur
:from_self :Set to false to not wrap the returned dataset in a from_self, use with care.
  DB[:items].except(DB[:other_items])
  # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS t1

  DB[:items].except(DB[:other_items], all: true, from_self: false)
  # SELECT * FROM items EXCEPT ALL SELECT * FROM other_items

  DB[:items].except(DB[:other_items], alias: :i)
  # SELECT * FROM (SELECT * FROM items EXCEPT SELECT * FROM other_items) AS i

[Source]

     # File lib/sequel/dataset/query.rb, line 149
149:     def except(dataset, opts=OPTS)
150:       raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except?
151:       raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all?
152:       compound_clone(:except, dataset, opts)
153:     end

Performs the inverse of Dataset#where. Note that if you have multiple filter conditions, this is not the same as a negation of all conditions.

  DB[:items].exclude(category: 'software')
  # SELECT * FROM items WHERE (category != 'software')

  DB[:items].exclude(category: 'software', id: 3)
  # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))

Also note that SQL uses 3-valued boolean logic (true, false, NULL), so the inverse of a true condition is a false condition, and will still not match rows that were NULL originally. If you take the earlier example:

  DB[:items].exclude(category: 'software')
  # SELECT * FROM items WHERE (category != 'software')

Note that this does not match rows where category is NULL. This is because NULL is an unknown value, and you do not know whether or not the NULL category is software. You can explicitly specify how to handle NULL values if you want:

  DB[:items].exclude(Sequel.~(category: nil) & {category: 'software'})
  # SELECT * FROM items WHERE ((category IS NULL) OR (category != 'software'))

[Source]

     # File lib/sequel/dataset/query.rb, line 179
179:     def exclude(*cond, &block)
180:       add_filter(:where, cond, true, &block)
181:     end

Inverts the given conditions and adds them to the HAVING clause.

  DB[:items].select_group(:name).exclude_having{count(name) < 2}
  # SELECT name FROM items GROUP BY name HAVING (count(name) >= 2)

See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.

[Source]

     # File lib/sequel/dataset/query.rb, line 190
190:     def exclude_having(*cond, &block)
191:       add_filter(:having, cond, true, &block)
192:     end

Return a clone of the dataset loaded with the given dataset extensions. If no related extension file exists or the extension does not have specific support for Dataset objects, an Error will be raised.
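
For example, using an extension that registers Dataset support (null_dataset shown; any registered dataset extension works the same way):

  ds = DB[:items].extension(:null_dataset)
  ds.nullify.all  # => [] -- no database query is issued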

[Source]

     # File lib/sequel/dataset/query.rb, line 198
198:       def extension(*a)
199:         c = _clone(:freeze=>false)
200:         c.send(:_extension!, a)
201:         c.freeze
202:       end

Alias for where.

[Source]

     # File lib/sequel/dataset/query.rb, line 214
214:     def filter(*cond, &block)
215:       where(*cond, &block)
216:     end

Returns a cloned dataset with a :update lock style.

  DB[:table].for_update # SELECT * FROM table FOR UPDATE

[Source]

     # File lib/sequel/dataset/query.rb, line 221
221:     def for_update
222:       cached_dataset(:_for_update_ds){lock_style(:update)}
223:     end

Returns a copy of the dataset with the source changed. If no source is given, removes all tables. If multiple sources are given, it is the same as using a CROSS JOIN (cartesian product) between all tables. If a block is given, it is treated as a virtual row block, similar to where.

  DB[:items].from # SQL: SELECT *
  DB[:items].from(:blah) # SQL: SELECT * FROM blah
  DB[:items].from(:blah, :foo) # SQL: SELECT * FROM blah, foo
  DB[:items].from{fun(arg)} # SQL: SELECT * FROM fun(arg)

[Source]

     # File lib/sequel/dataset/query.rb, line 234
234:     def from(*source, &block)
235:       virtual_row_columns(source, block)
236:       table_alias_num = 0
237:       ctes = nil
238:       source.map! do |s|
239:         case s
240:         when Dataset
241:           if hoist_cte?(s)
242:             ctes ||= []
243:             ctes += s.opts[:with]
244:             s = s.clone(:with=>nil)
245:           end
246:           SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1))
247:         when Symbol
248:           sch, table, aliaz = split_symbol(s)
249:           if aliaz
250:             s = sch ? SQL::QualifiedIdentifier.new(sch, table) : SQL::Identifier.new(table)
251:             SQL::AliasedExpression.new(s, aliaz.to_sym)
252:           else
253:             s
254:           end
255:         else
256:           s
257:         end
258:       end
259:       o = {:from=>source.empty? ? nil : source.freeze}
260:       o[:with] = ((opts[:with] || EMPTY_ARRAY) + ctes).freeze if ctes
261:       o[:num_dataset_sources] = table_alias_num if table_alias_num > 0
262:       clone(o)
263:     end

Returns a dataset selecting from the current dataset. Options:

:alias :Controls the alias of the table
:column_aliases :Also aliases columns, using derived column lists. Only used in conjunction with :alias.
  ds = DB[:items].order(:name).select(:id, :name)
  # SELECT id, name FROM items ORDER BY name

  ds.from_self
  # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1

  ds.from_self(alias: :foo)
  # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo

  ds.from_self(alias: :foo, column_aliases: [:c1, :c2])
  # SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo(c1, c2)

[Source]

     # File lib/sequel/dataset/query.rb, line 282
282:     def from_self(opts=OPTS)
283:       fs = {}
284:       @opts.keys.each{|k| fs[k] = nil unless non_sql_option?(k)}
285:       pr = proc do
286:         c = clone(fs).from(opts[:alias] ? as(opts[:alias], opts[:column_aliases]) : self)
287:         if cols = _columns
288:           c.send(:columns=, cols)
289:         end
290:         c
291:       end
292: 
293:       cache ? cached_dataset(:_from_self_ds, &pr) : pr.call
294:     end

Match any of the columns to any of the patterns. The terms can be strings (which use LIKE) or regular expressions if the database supports that. Note that the total number of pattern matches will be Array(columns).length * Array(terms).length, which could cause performance issues.

Options (all are boolean):

:all_columns :All columns must be matched to any of the given patterns.
:all_patterns :All patterns must match at least one of the columns.
:case_insensitive :Use a case insensitive pattern match (the default is case sensitive if the database supports it).

If both :all_columns and :all_patterns are true, all columns must match all patterns.

Examples:

  dataset.grep(:a, '%test%')
  # SELECT * FROM items WHERE (a LIKE '%test%' ESCAPE '\')

  dataset.grep([:a, :b], %w'%test% foo')
  # SELECT * FROM items WHERE ((a LIKE '%test%' ESCAPE '\') OR (a LIKE 'foo' ESCAPE '\')
  #   OR (b LIKE '%test%' ESCAPE '\') OR (b LIKE 'foo' ESCAPE '\'))

  dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true)
  # SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (b LIKE '%foo%' ESCAPE '\'))
  #   AND ((a LIKE '%bar%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

  dataset.grep([:a, :b], %w'%foo% %bar%', all_columns: true)
  # SELECT * FROM a WHERE (((a LIKE '%foo%' ESCAPE '\') OR (a LIKE '%bar%' ESCAPE '\'))
  #   AND ((b LIKE '%foo%' ESCAPE '\') OR (b LIKE '%bar%' ESCAPE '\')))

  dataset.grep([:a, :b], %w'%foo% %bar%', all_patterns: true, all_columns: true)
  # SELECT * FROM a WHERE ((a LIKE '%foo%' ESCAPE '\') AND (b LIKE '%foo%' ESCAPE '\')
  #   AND (a LIKE '%bar%' ESCAPE '\') AND (b LIKE '%bar%' ESCAPE '\'))

[Source]

     # File lib/sequel/dataset/query.rb, line 331
331:     def grep(columns, patterns, opts=OPTS)
332:       if opts[:all_patterns]
333:         conds = Array(patterns).map do |pat|
334:           SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *Array(columns).map{|c| SQL::StringExpression.like(c, pat, opts)})
335:         end
336:         where(SQL::BooleanExpression.new(opts[:all_patterns] ? :AND : :OR, *conds))
337:       else
338:         conds = Array(columns).map do |c|
339:           SQL::BooleanExpression.new(:OR, *Array(patterns).map{|pat| SQL::StringExpression.like(c, pat, opts)})
340:         end
341:         where(SQL::BooleanExpression.new(opts[:all_columns] ? :AND : :OR, *conds))
342:       end
343:     end

Returns a copy of the dataset with the results grouped by the value of the given columns. If a block is given, it is treated as a virtual row block, similar to where.

  DB[:items].group(:id) # SELECT * FROM items GROUP BY id
  DB[:items].group(:id, :name) # SELECT * FROM items GROUP BY id, name
  DB[:items].group{[a, sum(b)]} # SELECT * FROM items GROUP BY a, sum(b)

[Source]

     # File lib/sequel/dataset/query.rb, line 352
352:     def group(*columns, &block)
353:       virtual_row_columns(columns, block)
354:       clone(:group => (columns.compact.empty? ? nil : columns.freeze))
355:     end

Returns a dataset grouped by the given column with count by group. Column aliases may be supplied, and will be included in the select clause. If a block is given, it is treated as a virtual row block, similar to where.

Examples:

  DB[:items].group_and_count(:name).all
  # SELECT name, count(*) AS count FROM items GROUP BY name
  # => [{:name=>'a', :count=>1}, ...]

  DB[:items].group_and_count(:first_name, :last_name).all
  # SELECT first_name, last_name, count(*) AS count FROM items GROUP BY first_name, last_name
  # => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]

  DB[:items].group_and_count(Sequel[:first_name].as(:name)).all
  # SELECT first_name AS name, count(*) AS count FROM items GROUP BY first_name
  # => [{:name=>'a', :count=>1}, ...]

  DB[:items].group_and_count{substr(:first_name, 1, 1).as(:initial)}.all
  # SELECT substr(first_name, 1, 1) AS initial, count(*) AS count FROM items GROUP BY substr(first_name, 1, 1)
  # => [{:initial=>'a', :count=>1}, ...]

[Source]

     # File lib/sequel/dataset/query.rb, line 383
383:     def group_and_count(*columns, &block)
384:       select_group(*columns, &block).select_append(COUNT_OF_ALL_AS_COUNT)
385:     end

Returns a copy of the dataset with the given columns added to the list of existing columns to group on. If no existing columns are present this method simply sets the columns as the initial ones to group on.

  DB[:items].group_append(:b) # SELECT * FROM items GROUP BY b
  DB[:items].group(:a).group_append(:b) # SELECT * FROM items GROUP BY a, b

[Source]

     # File lib/sequel/dataset/query.rb, line 393
393:     def group_append(*columns, &block)
394:       columns = @opts[:group] + columns if @opts[:group]
395:       group(*columns, &block)
396:     end

Alias of group

[Source]

     # File lib/sequel/dataset/query.rb, line 358
358:     def group_by(*columns, &block)
359:       group(*columns, &block)
360:     end

Adds the appropriate CUBE syntax to GROUP BY.
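
For example, on a database that uses the CUBE(...) form (column names here are illustrative):

  DB[:items].group(:type, :region).group_cube
  # SELECT * FROM items GROUP BY CUBE(type, region)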

[Source]

     # File lib/sequel/dataset/query.rb, line 399
399:     def group_cube
400:       raise Error, "GROUP BY CUBE not supported on #{db.database_type}" unless supports_group_cube?
401:       clone(:group_options=>:cube)
402:     end

Adds the appropriate ROLLUP syntax to GROUP BY.
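
For example, on a database that uses the ROLLUP(...) form (column names here are illustrative):

  DB[:items].group(:type, :region).group_rollup
  # SELECT * FROM items GROUP BY ROLLUP(type, region)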

[Source]

     # File lib/sequel/dataset/query.rb, line 405
405:     def group_rollup
406:       raise Error, "GROUP BY ROLLUP not supported on #{db.database_type}" unless supports_group_rollup?
407:       clone(:group_options=>:rollup)
408:     end

Adds the appropriate GROUPING SETS syntax to GROUP BY.
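
For example, on a supporting database (column names and the particular grouping sets are illustrative):

  DB[:items].group([:type, :region], :type, []).grouping_sets
  # SELECT * FROM items GROUP BY GROUPING SETS((type, region), (type), ())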

[Source]

     # File lib/sequel/dataset/query.rb, line 411
411:     def grouping_sets
412:       raise Error, "GROUP BY GROUPING SETS not supported on #{db.database_type}" unless supports_grouping_sets?
413:       clone(:group_options=>:"grouping sets")
414:     end

Returns a copy of the dataset with the HAVING conditions changed. See where for argument types.

  DB[:items].group(:sum).having(sum: 10)
  # SELECT * FROM items GROUP BY sum HAVING (sum = 10)

[Source]

     # File lib/sequel/dataset/query.rb, line 420
420:     def having(*cond, &block)
421:       add_filter(:having, cond, &block)
422:     end

Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options:

:alias :Use the given value as the from_self alias
:all :Set to true to use INTERSECT ALL instead of INTERSECT, so duplicate rows can occur
:from_self :Set to false to not wrap the returned dataset in a from_self, use with care.
  DB[:items].intersect(DB[:other_items])
  # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS t1

  DB[:items].intersect(DB[:other_items], all: true, from_self: false)
  # SELECT * FROM items INTERSECT ALL SELECT * FROM other_items

  DB[:items].intersect(DB[:other_items], alias: :i)
  # SELECT * FROM (SELECT * FROM items INTERSECT SELECT * FROM other_items) AS i

[Source]

     # File lib/sequel/dataset/query.rb, line 441
441:     def intersect(dataset, opts=OPTS)
442:       raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except?
443:       raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all?
444:       compound_clone(:intersect, dataset, opts)
445:     end

Inverts the current WHERE and HAVING clauses. If there is neither a WHERE nor a HAVING clause, adds a WHERE clause that is always false.

  DB[:items].where(category: 'software').invert
  # SELECT * FROM items WHERE (category != 'software')

  DB[:items].where(category: 'software', id: 3).invert
  # SELECT * FROM items WHERE ((category != 'software') OR (id != 3))

See documentation for exclude for how inversion is handled in regards to SQL 3-valued boolean logic.

[Source]

     # File lib/sequel/dataset/query.rb, line 458
458:     def invert
459:       cached_dataset(:_invert_ds) do
460:         having, where = @opts.values_at(:having, :where)
461:         if having.nil? && where.nil?
462:           where(false)
463:         else
464:           o = {}
465:           o[:having] = SQL::BooleanExpression.invert(having) if having
466:           o[:where] = SQL::BooleanExpression.invert(where) if where
467:           clone(o)
468:         end
469:       end
470:     end

Alias of inner_join

[Source]

     # File lib/sequel/dataset/query.rb, line 473
473:     def join(*args, &block)
474:       inner_join(*args, &block)
475:     end

Returns a joined dataset. Not usually called directly, users should use the appropriate join method (e.g. join, left_join, natural_join, cross_join) which fills in the type argument.

Takes the following arguments:

type :The type of join to do (e.g. :inner)
table :table to join into the current dataset. Generally one of the following types:
String, Symbol :identifier used as table or view name
Dataset :a subselect is performed with an alias of tN for some value of N
SQL::Function :set returning function
SQL::AliasedExpression :already aliased expression. Uses given alias unless overridden by the :table_alias option.
expr :conditions used when joining, depends on type:
Hash, Array of pairs :Assumes key (1st arg) is column of joined table (unless already qualified), and value (2nd arg) is column of the last joined or primary table (or the :implicit_qualifier option). To specify multiple conditions on a single joined table column, you must use an array. Uses a JOIN with an ON clause.
Array :If all members of the array are symbols, considers them as columns and uses a JOIN with a USING clause. Most databases will remove duplicate columns from the result set if this is used.
nil :If a block is not given, doesn't use ON or USING, so the JOIN should be a NATURAL or CROSS join. If a block is given, uses an ON clause based on the block, see below.
otherwise :Treats the argument as a filter expression, so strings are considered literal, symbols specify boolean columns, and Sequel expressions can be used. Uses a JOIN with an ON clause.
options :a hash of options, with the following keys supported:
:table_alias :Override the table alias used when joining. In general you shouldn't use this option, you should provide the appropriate SQL::AliasedExpression as the table argument.
:implicit_qualifier :The name to use for qualifying implicit conditions. By default, the last joined or primary table is used.
:reset_implicit_qualifier :Can set to false to ignore this join when future joins determine qualifier for implicit conditions.
:qualify :Can be set to false to not do any implicit qualification. Can be set to :deep to use the Qualifier AST Transformer, which will attempt to qualify subexpressions of the expression tree. Can be set to :symbol to only qualify symbols. Defaults to the value of default_join_table_qualification.
block :The block argument should only be given if a JOIN with an ON clause is used, in which case it yields the table alias/name for the table currently being joined, the table alias/name for the last joined (or first table), and an array of previous SQL::JoinClause. Unlike where, this block is not treated as a virtual row block.

Examples:

  DB[:a].join_table(:cross, :b)
  # SELECT * FROM a CROSS JOIN b

  DB[:a].join_table(:inner, DB[:b], c: :d)
  # SELECT * FROM a INNER JOIN (SELECT * FROM b) AS t1 ON (t1.c = a.d)

  DB[:a].join_table(:left, Sequel[:b].as(:c), [:d])
  # SELECT * FROM a LEFT JOIN b AS c USING (d)

  DB[:a].natural_join(:b).join_table(:inner, :c) do |ta, jta, js|
    (Sequel.qualify(ta, :d) > Sequel.qualify(jta, :e)) & {Sequel.qualify(ta, :f)=>DB.from(js.first.table).select(:g)}
  end
  # SELECT * FROM a NATURAL JOIN b INNER JOIN c
  #   ON ((c.d > b.e) AND (c.f IN (SELECT g FROM b)))

[Source]

     # File lib/sequel/dataset/query.rb, line 536
536:     def join_table(type, table, expr=nil, options=OPTS, &block)
537:       if hoist_cte?(table)
538:         s, ds = hoist_cte(table)
539:         return s.join_table(type, ds, expr, options, &block)
540:       end
541: 
542:       using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}
543:       if using_join && !supports_join_using?
544:         h = {}
545:         expr.each{|e| h[e] = e}
546:         return join_table(type, table, h, options)
547:       end
548: 
549:       table_alias = options[:table_alias]
550: 
551:       if table.is_a?(SQL::AliasedExpression)
552:         table_expr = if table_alias
553:           SQL::AliasedExpression.new(table.expression, table_alias, table.columns)
554:         else
555:           table
556:         end
557:         table = table_expr.expression
558:         table_name = table_alias = table_expr.alias
559:       elsif table.is_a?(Dataset)
560:         if table_alias.nil?
561:           table_alias_num = (@opts[:num_dataset_sources] || 0) + 1
562:           table_alias = dataset_alias(table_alias_num)
563:         end
564:         table_name = table_alias
565:         table_expr = SQL::AliasedExpression.new(table, table_alias)
566:       else
567:         table, implicit_table_alias = split_alias(table)
568:         table_alias ||= implicit_table_alias
569:         table_name = table_alias || table
570:         table_expr = table_alias ? SQL::AliasedExpression.new(table, table_alias) : table
571:       end
572: 
573:       join = if expr.nil? and !block
574:         SQL::JoinClause.new(type, table_expr)
575:       elsif using_join
576:         raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block
577:         SQL::JoinUsingClause.new(expr, type, table_expr)
578:       else
579:         last_alias = options[:implicit_qualifier] || @opts[:last_joined_table] || first_source_alias
580:         qualify_type = options[:qualify]
581:         if Sequel.condition_specifier?(expr)
582:           expr = expr.map do |k, v|
583:             qualify_type = default_join_table_qualification if qualify_type.nil?
584:             case qualify_type
585:             when false
586:               nil # Do no qualification
587:             when :deep
588:               k = Sequel::Qualifier.new(table_name).transform(k)
589:               v = Sequel::Qualifier.new(last_alias).transform(v)
590:             else
591:               k = qualified_column_name(k, table_name) if k.is_a?(Symbol)
592:               v = qualified_column_name(v, last_alias) if v.is_a?(Symbol)
593:             end
594:             [k,v]
595:           end
596:           expr = SQL::BooleanExpression.from_value_pairs(expr)
597:         end
598:         if block
599:           expr2 = yield(table_name, last_alias, @opts[:join] || EMPTY_ARRAY)
600:           expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2
601:         end
602:         SQL::JoinOnClause.new(expr, type, table_expr)
603:       end
604: 
605:       opts = {:join => ((@opts[:join] || EMPTY_ARRAY) + [join]).freeze}
606:       opts[:last_joined_table] = table_name unless options[:reset_implicit_qualifier] == false
607:       opts[:num_dataset_sources] = table_alias_num if table_alias_num
608:       clone(opts)
609:     end

Marks this dataset as a lateral dataset. If used in another dataset's FROM or JOIN clauses, it will surround the subquery with LATERAL to enable it to deal with previous tables in the query:

  DB.from(:a, DB[:b].where(Sequel[:a][:c]=>Sequel[:b][:d]).lateral)
  # SELECT * FROM a, LATERAL (SELECT * FROM b WHERE (a.c = b.d))

[Source]

     # File lib/sequel/dataset/query.rb, line 631
631:     def lateral
632:       cached_dataset(:_lateral_ds){clone(:lateral=>true)}
633:     end

If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset. To use an offset without a limit, pass nil as the first argument.

  DB[:items].limit(10) # SELECT * FROM items LIMIT 10
  DB[:items].limit(10, 20) # SELECT * FROM items LIMIT 10 OFFSET 20
  DB[:items].limit(10...20) # SELECT * FROM items LIMIT 10 OFFSET 10
  DB[:items].limit(10..20) # SELECT * FROM items LIMIT 11 OFFSET 10
  DB[:items].limit(nil, 20) # SELECT * FROM items OFFSET 20

[Source]

     # File lib/sequel/dataset/query.rb, line 645
645:     def limit(l, o = (no_offset = true; nil))
646:       return from_self.limit(l, o) if @opts[:sql]
647: 
648:       if l.is_a?(Range)
649:         no_offset = false
650:         o = l.first
651:         l = l.last - l.first + (l.exclude_end? ? 0 : 1)
652:       end
653:       l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString)
654:       if l.is_a?(Integer)
655:         raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1
656:       end
657: 
658:       ds = clone(:limit=>l)
659:       ds = ds.offset(o) unless no_offset
660:       ds
661:     end

Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. You should never pass a string to this method that is derived from user input, as that can lead to SQL injection.

A symbol may be used for database independent locking behavior, but all supported symbols have separate methods (e.g. for_update).

  DB[:items].lock_style('FOR SHARE NOWAIT')
  # SELECT * FROM items FOR SHARE NOWAIT
  DB[:items].lock_style('FOR UPDATE OF table1 SKIP LOCKED')
  # SELECT * FROM items FOR UPDATE OF table1 SKIP LOCKED

[Source]

     # File lib/sequel/dataset/query.rb, line 675
675:     def lock_style(style)
676:       clone(:lock => style)
677:     end

Returns a cloned dataset without a row_proc.

  ds = DB[:items].with_row_proc(:invert.to_proc)
  ds.all # => [{2=>:id}]
  ds.naked.all # => [{:id=>2}]

[Source]

     # File lib/sequel/dataset/query.rb, line 684
684:     def naked
685:       cached_dataset(:_naked_ds){with_row_proc(nil)}
686:     end

Returns a copy of the dataset that will raise a DatabaseLockTimeout instead of waiting for rows that are locked by another transaction.

  DB[:items].for_update.nowait
  # SELECT * FROM items FOR UPDATE NOWAIT

[Source]

     # File lib/sequel/dataset/query.rb, line 693
693:     def nowait
694:       cached_dataset(:_nowait_ds) do
695:         raise(Error, 'This dataset does not support raising errors instead of waiting for locked rows') unless supports_nowait?
696:         clone(:nowait=>true)
697:       end
698:     end

Returns a copy of the dataset with a specified offset. Can be safely combined with limit. If you call limit with an offset, it will override the offset if you've called offset first.

  DB[:items].offset(10) # SELECT * FROM items OFFSET 10

[Source]

     # File lib/sequel/dataset/query.rb, line 705
705:     def offset(o)
706:       o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString)
707:       if o.is_a?(Integer)
708:         raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
709:       end
710:       clone(:offset => o)
711:     end

Adds an alternate filter to an existing WHERE clause using OR. If there is no WHERE clause, then the default is WHERE true, and OR would be redundant, so the dataset is returned unchanged in that case.

  DB[:items].where(:a).or(:b) # SELECT * FROM items WHERE a OR b
  DB[:items].or(:b) # SELECT * FROM items

[Source]

     # File lib/sequel/dataset/query.rb, line 719
719:     def or(*cond, &block)
720:       if @opts[:where].nil?
721:         self
722:       else
723:         add_filter(:where, cond, false, :OR, &block)
724:       end
725:     end

Returns a copy of the dataset with the order changed. If the dataset has an existing order, it is ignored and overwritten with this order. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, such as SQL functions. If a block is given, it is treated as a virtual row block, similar to where.

  DB[:items].order(:name) # SELECT * FROM items ORDER BY name
  DB[:items].order(:a, :b) # SELECT * FROM items ORDER BY a, b
  DB[:items].order(Sequel.lit('a + b')) # SELECT * FROM items ORDER BY a + b
  DB[:items].order(Sequel[:a] + :b) # SELECT * FROM items ORDER BY (a + b)
  DB[:items].order(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name DESC
  DB[:items].order(Sequel.asc(:name, :nulls=>:last)) # SELECT * FROM items ORDER BY name ASC NULLS LAST
  DB[:items].order{sum(name).desc} # SELECT * FROM items ORDER BY sum(name) DESC
  DB[:items].order(nil) # SELECT * FROM items

[Source]

     # File lib/sequel/dataset/query.rb, line 741
741:     def order(*columns, &block)
742:       virtual_row_columns(columns, block)
743:       clone(:order => (columns.compact.empty?) ? nil : columns.freeze)
744:     end

Returns a copy of the dataset with the order columns added to the end of the existing order.

  DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
  DB[:items].order(:a).order_append(:b) # SELECT * FROM items ORDER BY a, b

[Source]

     # File lib/sequel/dataset/query.rb, line 751
751:     def order_append(*columns, &block)
752:       columns = @opts[:order] + columns if @opts[:order]
753:       order(*columns, &block)
754:     end

Alias of order

[Source]

     # File lib/sequel/dataset/query.rb, line 757
757:     def order_by(*columns, &block)
758:       order(*columns, &block)
759:     end

Alias of order_append.

[Source]

     # File lib/sequel/dataset/query.rb, line 762
762:     def order_more(*columns, &block)
763:       order_append(*columns, &block)
764:     end

Returns a copy of the dataset with the order columns added to the beginning of the existing order.

  DB[:items].order(:a).order(:b) # SELECT * FROM items ORDER BY b
  DB[:items].order(:a).order_prepend(:b) # SELECT * FROM items ORDER BY b, a

[Source]

     # File lib/sequel/dataset/query.rb, line 771
771:     def order_prepend(*columns, &block)
772:       ds = order(*columns, &block)
773:       @opts[:order] ? ds.order_append(*@opts[:order]) : ds
774:     end

Qualify to the given table, or first source if no table is given.

  DB[:items].where(id: 1).qualify
  # SELECT items.* FROM items WHERE (items.id = 1)

  DB[:items].where(id: 1).qualify(:i)
  # SELECT i.* FROM items WHERE (i.id = 1)

[Source]

     # File lib/sequel/dataset/query.rb, line 783
783:     def qualify(table=(cache=true; first_source))
784:       o = @opts
785:       return self if o[:sql]
786: 
787:       pr = proc do
788:         h = {}
789:         (o.keys & QUALIFY_KEYS).each do |k|
790:           h[k] = qualified_expression(o[k], table)
791:         end
792:         h[:select] = [SQL::ColumnAll.new(table)].freeze if !o[:select] || o[:select].empty?
793:         clone(h)
794:       end
795: 
796:       cache ? cached_dataset(:_qualify_ds, &pr) : pr.call
797:     end

Modify the RETURNING clause, only supported on a few databases. If returning is used, instead of insert returning the autogenerated primary key or update/delete returning the number of modified rows, results are returned using fetch_rows.

  DB[:items].returning # RETURNING *
  DB[:items].returning(nil) # RETURNING NULL
  DB[:items].returning(:id, :name) # RETURNING id, name

  DB[:items].returning.insert(:a=>1) do |hash|
    # hash for each row inserted, with values for all columns
  end
  DB[:items].returning.update(:a=>1) do |hash|
    # hash for each row updated, with values for all columns
  end
  DB[:items].returning.delete(:a=>1) do |hash|
    # hash for each row deleted, with values for all columns
  end

[Source]

     # File lib/sequel/dataset/query.rb, line 817
817:     def returning(*values)
818:       if values.empty?
819:         cached_dataset(:_returning_ds) do
820:           raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert)
821:           clone(:returning=>EMPTY_ARRAY)
822:         end
823:       else
824:         raise Error, "RETURNING is not supported on #{db.database_type}" unless supports_returning?(:insert)
825:         clone(:returning=>values.freeze)
826:       end
827:     end

Returns a copy of the dataset with the order reversed. If no order is given, the existing order is inverted.

  DB[:items].reverse(:id) # SELECT * FROM items ORDER BY id DESC
  DB[:items].reverse{foo(bar)} # SELECT * FROM items ORDER BY foo(bar) DESC
  DB[:items].order(:id).reverse # SELECT * FROM items ORDER BY id DESC
  DB[:items].order(:id).reverse(Sequel.desc(:name)) # SELECT * FROM items ORDER BY name ASC

[Source]

     # File lib/sequel/dataset/query.rb, line 836
836:     def reverse(*order, &block)
837:       if order.empty? && !block
838:         cached_dataset(:_reverse_ds){order(*invert_order(@opts[:order]))}
839:       else
840:         virtual_row_columns(order, block)
841:         order(*invert_order(order.empty? ? @opts[:order] : order.freeze))
842:       end
843:     end

Alias of reverse

[Source]

     # File lib/sequel/dataset/query.rb, line 846
846:     def reverse_order(*order, &block)
847:       reverse(*order, &block)
848:     end

Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to where.

  DB[:items].select(:a) # SELECT a FROM items
  DB[:items].select(:a, :b) # SELECT a, b FROM items
  DB[:items].select{[a, sum(b)]} # SELECT a, sum(b) FROM items

[Source]

     # File lib/sequel/dataset/query.rb, line 857
857:     def select(*columns, &block)
858:       virtual_row_columns(columns, block)
859:       clone(:select => columns.freeze)
860:     end

Returns a copy of the dataset selecting the wildcard if no arguments are given. If arguments are given, treat them as tables and select all columns (using the wildcard) from each table.

  DB[:items].select(:a).select_all # SELECT * FROM items
  DB[:items].select_all(:items) # SELECT items.* FROM items
  DB[:items].select_all(:items, :foo) # SELECT items.*, foo.* FROM items

[Source]

     # File lib/sequel/dataset/query.rb, line 869
869:     def select_all(*tables)
870:       if tables.empty?
871:         cached_dataset(:_select_all_ds){clone(:select => nil)}
872:       else
873:         select(*tables.map{|t| i, a = split_alias(t); a || i}.map!{|t| SQL::ColumnAll.new(t)}.freeze)
874:       end
875:     end

Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected, it will select the columns given in addition to *.

  DB[:items].select(:a).select(:b) # SELECT b FROM items
  DB[:items].select(:a).select_append(:b) # SELECT a, b FROM items
  DB[:items].select_append(:b) # SELECT *, b FROM items

[Source]

     # File lib/sequel/dataset/query.rb, line 884
884:     def select_append(*columns, &block)
885:       cur_sel = @opts[:select]
886:       if !cur_sel || cur_sel.empty?
887:         unless supports_select_all_and_column?
888:           return select_all(*(Array(@opts[:from]) + Array(@opts[:join]))).select_append(*columns, &block)
889:         end
890:         cur_sel = [WILDCARD]
891:       end
892:       select(*(cur_sel + columns), &block)
893:     end

Set both the select and group clauses with the given columns. Column aliases may be supplied, and will be included in the select clause. This also takes a virtual row block similar to where.

  DB[:items].select_group(:a, :b)
  # SELECT a, b FROM items GROUP BY a, b

  DB[:items].select_group(Sequel[:c].as(:a)){f(c2)}
  # SELECT c AS a, f(c2) FROM items GROUP BY c, f(c2)

[Source]

     # File lib/sequel/dataset/query.rb, line 904
904:     def select_group(*columns, &block)
905:       virtual_row_columns(columns, block)
906:       select(*columns).group(*columns.map{|c| unaliased_identifier(c)})
907:     end

Alias for select_append.

[Source]

     # File lib/sequel/dataset/query.rb, line 910
910:     def select_more(*columns, &block)
911:       select_append(*columns, &block)
912:     end

Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (where SELECT uses :read_only database and all other queries use the :default database). This method is always available but is only useful when database sharding is being used.

  DB[:items].all # Uses the :read_only or :default server
  DB[:items].delete # Uses the :default server
  DB[:items].server(:blah).delete # Uses the :blah server

[Source]

     # File lib/sequel/dataset/query.rb, line 923
923:     def server(servr)
924:       clone(:server=>servr)
925:     end

If the database uses sharding and the current dataset has not had a server set, return a cloned dataset that uses the given server. Otherwise, return the receiver directly instead of returning a clone.
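
An illustrative sketch (assumes a sharded Database with a :read_only shard configured):

  DB[:items].server?(:read_only).all
  # Runs against the :read_only shard if sharding is used and no server was
  # set; otherwise behaves the same as DB[:items].all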

[Source]

     # File lib/sequel/dataset/query.rb, line 930
930:     def server?(server)
931:       if db.sharded? && !opts[:server]
932:         server(server)
933:       else
934:         self
935:       end
936:     end

Specify that the check for limits/offsets when updating/deleting be skipped for the dataset.
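
A hedged sketch (whether the limit is ultimately honored depends on the database):

  DB[:items].limit(10).skip_limit_check.delete
  # Without skip_limit_check, deleting from a dataset with a limit or offset
  # may raise instead of running the query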

[Source]

     # File lib/sequel/dataset/query.rb, line 939
939:     def skip_limit_check
940:       cached_dataset(:_skip_limit_check_ds) do
941:         clone(:skip_limit_check=>true)
942:       end
943:     end

Skip locked rows when returning results from this dataset.
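
For example, on a database with SKIP LOCKED support:

  DB[:items].for_update.skip_locked
  # SELECT * FROM items FOR UPDATE SKIP LOCKED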

[Source]

     # File lib/sequel/dataset/query.rb, line 946
946:     def skip_locked
947:       cached_dataset(:_skip_locked_ds) do
948:         raise(Error, 'This dataset does not support skipping locked rows') unless supports_skip_locked?
949:         clone(:skip_locked=>true)
950:       end
951:     end

Returns a copy of the dataset with no filters (HAVING or WHERE clause) applied.

  DB[:items].group(:a).having(a: 1).where(:b).unfiltered
  # SELECT * FROM items GROUP BY a

[Source]

     # File lib/sequel/dataset/query.rb, line 957
957:     def unfiltered
958:       cached_dataset(:_unfiltered_ds){clone(:where => nil, :having => nil)}
959:     end

Returns a copy of the dataset with no grouping (GROUP or HAVING clause) applied.

  DB[:items].group(:a).having(a: 1).where(:b).ungrouped
  # SELECT * FROM items WHERE b

[Source]

     # File lib/sequel/dataset/query.rb, line 965
965:     def ungrouped
966:       cached_dataset(:_ungrouped_ds){clone(:group => nil, :having => nil)}
967:     end

Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options:

:alias :Use the given value as the from_self alias
:all :Set to true to use UNION ALL instead of UNION, so duplicate rows can occur
:from_self :Set to false to not wrap the returned dataset in a from_self, use with care.
  DB[:items].union(DB[:other_items])
  # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS t1

  DB[:items].union(DB[:other_items], all: true, from_self: false)
  # SELECT * FROM items UNION ALL SELECT * FROM other_items

  DB[:items].union(DB[:other_items], alias: :i)
  # SELECT * FROM (SELECT * FROM items UNION SELECT * FROM other_items) AS i

[Source]

     # File lib/sequel/dataset/query.rb, line 985
985:     def union(dataset, opts=OPTS)
986:       compound_clone(:union, dataset, opts)
987:     end

Returns a copy of the dataset with no limit or offset.

  DB[:items].limit(10, 20).unlimited # SELECT * FROM items

[Source]

     # File lib/sequel/dataset/query.rb, line 992
992:     def unlimited
993:       cached_dataset(:_unlimited_ds){clone(:limit=>nil, :offset=>nil)}
994:     end

Returns a copy of the dataset with no order.

  DB[:items].order(:a).unordered # SELECT * FROM items

[Source]

      # File lib/sequel/dataset/query.rb, line 999
 999:     def unordered
1000:       cached_dataset(:_unordered_ds){clone(:order=>nil)}
1001:     end

Returns a copy of the dataset with the given WHERE conditions imposed upon it.

Accepts the following argument types:

Hash, Array of pairs :list of equality/inclusion expressions
Symbol :taken as a boolean column argument (e.g. WHERE active)
Sequel::SQL::BooleanExpression, Sequel::LiteralString :an existing condition expression, probably created using the Sequel expression filter DSL.

where also accepts a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the "Virtual Rows" guide.

If both a block and regular argument are provided, they get ANDed together.

Examples:

  DB[:items].where(id: 3)
  # SELECT * FROM items WHERE (id = 3)

  DB[:items].where(Sequel.lit('price < ?', 100))
  # SELECT * FROM items WHERE price < 100

  DB[:items].where([[:id, [1,2,3]], [:id, 0..10]])
  # SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))

  DB[:items].where(Sequel.lit('price < 100'))
  # SELECT * FROM items WHERE price < 100

  DB[:items].where(:active)
  # SELECT * FROM items WHERE active

  DB[:items].where{price < 100}
  # SELECT * FROM items WHERE (price < 100)

Multiple where calls can be chained for scoping:

  software = dataset.where(category: 'software').where{price < 100}
  # SELECT * FROM items WHERE ((category = 'software') AND (price < 100))

See the "Dataset Filtering" guide for more examples and details.

[Source]

      # File lib/sequel/dataset/query.rb, line 1045
1045:     def where(*cond, &block)
1046:       add_filter(:where, cond, &block)
1047:     end

Add a common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:

:args :Specify the arguments/columns for the CTE, should be an array of symbols.
:recursive :Specify that this is a recursive CTE
  DB[:items].with(:items, DB[:syx].where(Sequel[:name].like('A%')))
  # WITH items AS (SELECT * FROM syx WHERE (name LIKE 'A%' ESCAPE '\')) SELECT * FROM items

[Source]

      # File lib/sequel/dataset/query.rb, line 1057
1057:     def with(name, dataset, opts=OPTS)
1058:       raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
1059:       if hoist_cte?(dataset)
1060:         s, ds = hoist_cte(dataset)
1061:         s.with(name, ds, opts)
1062:       else
1063:         clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:name=>name, :dataset=>dataset)]).freeze)
1064:       end
1065:     end

Return a clone of the dataset extended with the given modules. Note that like Object#extend, when multiple modules are provided as arguments the cloned dataset is extended with the modules in reverse order. If a block is provided, a DatasetModule is created using the block and the clone is extended with that module after any modules given as arguments.
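
A brief sketch (the added method name is illustrative):

  ds = DB[:items].with_extend do
    def active_only
      where(:active)
    end
  end
  ds.active_only # SELECT * FROM items WHERE active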

[Source]

      # File lib/sequel/dataset/query.rb, line 1102
1102:       def with_extend(*mods, &block)
1103:         c = _clone(:freeze=>false)
1104:         c.extend(*mods) unless mods.empty?
1105:         c.extend(DatasetModule.new(&block)) if block
1106:         c.freeze
1107:       end

Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:

:args :Specify the arguments/columns for the CTE, should be an array of symbols.
:union_all :Set to false to use UNION instead of UNION ALL combining the nonrecursive and recursive parts.
  DB[:t].with_recursive(:t,
    DB[:i1].select(:id, :parent_id).where(parent_id: nil),
    DB[:i1].join(:t, id: :parent_id).select(Sequel[:i1][:id], Sequel[:i1][:parent_id]),
    :args=>[:id, :parent_id])

  # WITH RECURSIVE t(id, parent_id) AS (
  #   SELECT id, parent_id FROM i1 WHERE (parent_id IS NULL)
  #   UNION ALL
  #   SELECT i1.id, i1.parent_id FROM i1 INNER JOIN t ON (t.id = i1.parent_id)
  # ) SELECT * FROM t

[Source]

      # File lib/sequel/dataset/query.rb, line 1083
1083:     def with_recursive(name, nonrecursive, recursive, opts=OPTS)
1084:       raise(Error, 'This dataset does not support common table expressions') unless supports_cte?
1085:       if hoist_cte?(nonrecursive)
1086:         s, ds = hoist_cte(nonrecursive)
1087:         s.with_recursive(name, ds, recursive, opts)
1088:       elsif hoist_cte?(recursive)
1089:         s, ds = hoist_cte(recursive)
1090:         s.with_recursive(name, nonrecursive, ds, opts)
1091:       else
1092:         clone(:with=>((@opts[:with]||EMPTY_ARRAY) + [Hash[opts].merge!(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]).freeze)
1093:       end
1094:     end

Returns a cloned dataset with the given row_proc.

  ds = DB[:items]
  ds.all # => [{:id=>2}]
  ds.with_row_proc(:invert.to_proc).all # => [{2=>:id}]

[Source]

      # File lib/sequel/dataset/query.rb, line 1124
1124:     def with_row_proc(callable)
1125:       clone(:row_proc=>callable)
1126:     end

Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.

  DB[:items].with_sql('SELECT * FROM foo') # SELECT * FROM foo

You can use placeholders in your SQL and provide arguments for those placeholders:

  DB[:items].with_sql('SELECT ? FROM foo', 1) # SELECT 1 FROM foo

You can also provide a method name and arguments to call to get the SQL:

  DB[:items].with_sql(:insert_sql, :b=>1) # INSERT INTO items (b) VALUES (1)

Note that datasets that specify custom SQL using this method will generally ignore future dataset methods that modify the SQL used, as specifying custom SQL overrides Sequel's SQL generator. You should probably limit yourself to the following dataset methods when using this method, or use the implicit_subquery extension:

[Source]

      # File lib/sequel/dataset/query.rb, line 1158
1158:     def with_sql(sql, *args)
1159:       if sql.is_a?(Symbol)
1160:         sql = public_send(sql, *args)
1161:       else
1162:         sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty?
1163:       end
1164:       clone(:sql=>sql)
1165:     end

Protected Instance methods

Add the dataset to the list of compounds

[Source]

      # File lib/sequel/dataset/query.rb, line 1170
1170:     def compound_clone(type, dataset, opts)
1171:       if dataset.is_a?(Dataset) && dataset.opts[:with] && !supports_cte_in_compounds?
1172:         s, ds = hoist_cte(dataset)
1173:         return s.compound_clone(type, ds, opts)
1174:       end
1175:       ds = compound_from_self.clone(:compounds=>(Array(@opts[:compounds]).map(&:dup) + [[type, dataset.compound_from_self, opts[:all]].freeze]).freeze)
1176:       opts[:from_self] == false ? ds : ds.from_self(opts)
1177:     end

Return true if the dataset has a non-nil value for any key in opts.

[Source]

      # File lib/sequel/dataset/query.rb, line 1180
1180:     def options_overlap(opts)
1181:       !(@opts.map{|k,v| k unless v.nil?}.compact & opts).empty?
1182:     end

Whether this dataset is a simple select from an underlying table, such as:

  SELECT * FROM table
  SELECT table.* FROM table

[Source]

      # File lib/sequel/dataset/query.rb, line 1191
1191:     def simple_select_all?
1192:       return false unless (f = @opts[:from]) && f.length == 1
1193:       o = @opts.reject{|k,v| v.nil? || non_sql_option?(k)}
1194:       from = f.first
1195:       from = from.expression if from.is_a?(SQL::AliasedExpression)
1196: 
1197:       if SIMPLE_SELECT_ALL_ALLOWED_FROM.any?{|x| from.is_a?(x)}
1198:         case o.length
1199:         when 1
1200:           true
1201:         when 2
1202:           (s = o[:select]) && s.length == 1 && s.first.is_a?(SQL::ColumnAll)
1203:         else
1204:           false
1205:         end
1206:       else
1207:         false
1208:       end
1209:     end

4 - Methods that describe what the dataset supports

These methods all return booleans, with most describing whether or not the dataset supports a feature.
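
These predicates are typically used to guard optional SQL features before using them, for example:

  ds = DB[:items]
  ds = ds.skip_locked if ds.supports_skip_locked?
  rows = ds.all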

Public Instance methods

Whether this dataset will provide accurate number of rows matched for delete and update statements, true by default. Accurate in this case is the number of rows matched by the dataset's filter.

[Source]

    # File lib/sequel/dataset/features.rb, line 19
19:     def provides_accurate_rows_matched?
20:       true
21:     end

Whether this dataset quotes identifiers.

[Source]

    # File lib/sequel/dataset/features.rb, line 12
12:     def quote_identifiers?
13:       @opts.fetch(:quote_identifiers, true)
14:     end

Whether you must use a column alias list for recursive CTEs, false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 24
24:     def recursive_cte_requires_column_aliases?
25:       false
26:     end

Whether type specifiers are required for prepared statement/bound variable argument placeholders (i.e. :bv__integer), false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 36
36:     def requires_placeholder_type_specifiers?
37:       false
38:     end

Whether the dataset requires SQL standard datetimes. False by default, as most allow strings with ISO 8601 format.

[Source]

    # File lib/sequel/dataset/features.rb, line 30
30:     def requires_sql_standard_datetimes?
31:       false
32:     end

Whether the dataset supports common table expressions, false by default. If given, type can be :select, :insert, :update, or :delete, in which case it determines whether WITH is supported for the respective statement type.

[Source]

    # File lib/sequel/dataset/features.rb, line 43
43:     def supports_cte?(type=:select)
44:       false
45:     end

Whether the dataset supports common table expressions in subqueries, false by default. If false, applies the WITH clause to the main query, which can cause issues if multiple WITH clauses use the same name.

[Source]

    # File lib/sequel/dataset/features.rb, line 50
50:     def supports_cte_in_subqueries?
51:       false
52:     end

Whether the database supports derived column lists (e.g. "table_expr AS table_alias(column_alias1, column_alias2, …)"), true by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 57
57:     def supports_derived_column_lists?
58:       true
59:     end

Whether the dataset supports or can emulate the DISTINCT ON clause, false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 62
62:     def supports_distinct_on?
63:       false
64:     end

Whether the dataset supports CUBE with GROUP BY, false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 67
67:     def supports_group_cube?
68:       false
69:     end

Whether the dataset supports ROLLUP with GROUP BY, false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 72
72:     def supports_group_rollup?
73:       false
74:     end

Whether the dataset supports GROUPING SETS with GROUP BY, false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 77
77:     def supports_grouping_sets?
78:       false
79:     end

Whether this dataset supports the insert_select method for returning all columns values directly from an insert query, false by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 83
83:     def supports_insert_select?
84:       supports_returning?(:insert)
85:     end

Whether the dataset supports the INTERSECT and EXCEPT compound operations, true by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 88
88:     def supports_intersect_except?
89:       true
90:     end

Whether the dataset supports the INTERSECT ALL and EXCEPT ALL compound operations, true by default.

[Source]

    # File lib/sequel/dataset/features.rb, line 93
93:     def supports_intersect_except_all?
94:       true
95:     end

Whether the dataset supports the IS TRUE syntax, true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 98
 98:     def supports_is_true?
 99:       true
100:     end

Whether the dataset supports the JOIN table USING (column1, …) syntax, true by default. If false, support is emulated using JOIN table ON (table.column1 = other_table.column1).

[Source]

     # File lib/sequel/dataset/features.rb, line 104
104:     def supports_join_using?
105:       true
106:     end

Whether the dataset supports LATERAL for subqueries in the FROM or JOIN clauses, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 109
109:     def supports_lateral_subqueries?
110:       false
111:     end

Whether limits are supported in correlated subqueries, true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 114
114:     def supports_limits_in_correlated_subqueries?
115:       true
116:     end

Whether modifying joined datasets is supported, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 124
124:     def supports_modifying_joins?
125:       false
126:     end

Whether the IN/NOT IN operators support multiple columns when an array of values is given, true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 130
130:     def supports_multiple_column_in?
131:       true
132:     end

Whether the dataset supports raising an error instead of waiting for locked rows when returning data, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 119
119:     def supports_nowait?
120:       false
121:     end

Whether offsets are supported in correlated subqueries, true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 135
135:     def supports_offsets_in_correlated_subqueries?
136:       true
137:     end

Whether the dataset supports or can fully emulate the DISTINCT ON clause, including respecting the ORDER BY clause, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 141
141:     def supports_ordered_distinct_on?
142:       supports_distinct_on?
143:     end

Whether the dataset supports pattern matching by regular expressions, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 146
146:     def supports_regexp?
147:       false
148:     end

Whether the dataset supports REPLACE syntax, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 151
151:     def supports_replace?
152:       false
153:     end

Whether the RETURNING clause is supported for the given type of query, false by default. type can be :insert, :update, or :delete.

[Source]

     # File lib/sequel/dataset/features.rb, line 157
157:     def supports_returning?(type)
158:       false
159:     end

Whether the database supports SELECT *, column FROM table, true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 167
167:     def supports_select_all_and_column?
168:       true
169:     end

Whether the dataset supports skipping locked rows when returning data, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 162
162:     def supports_skip_locked?
163:       false
164:     end

Whether the dataset supports timezones in literal timestamps, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 172
172:     def supports_timestamp_timezones?
173:       false
174:     end

Whether the dataset supports fractional seconds in literal timestamps, true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 177
177:     def supports_timestamp_usecs?
178:       true
179:     end

Whether the dataset supports WHERE TRUE (or WHERE 1 for databases that use 1 for true), true by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 188
188:     def supports_where_true?
189:       true
190:     end

Whether the dataset supports window functions, false by default.

[Source]

     # File lib/sequel/dataset/features.rb, line 182
182:     def supports_window_functions?
183:       false
184:     end