# This class provides a complete interface to CSV files and data. It offers
# tools to enable you to read and write to and from Strings or IO objects, as
# needed.
#
# The most generic interface of the library is:
#
#     csv = CSV.new(string_or_io, **options)
#
#     # Reading: IO object should be open for read
#     csv.read # => array of rows
#     # or
#     csv.each do |row|
#       # ...
#     end
#     # or
#     row = csv.shift
#
#     # Writing: IO object should be open for write
#     csv << row
#
# There are several specialized class methods for one-statement reading or
# writing, described in the Specialized Methods section.
#
# If a String is passed into ::new, it is internally wrapped into a StringIO
# object.
#
# `options` can be used for specifying the particular CSV flavor (column
# separators, row separators, value quoting and so on), and for data conversion;
# see the Data Conversion section for a description of the latter.
#
# ## Specialized Methods
#
# ### Reading
#
#     # From a file: all at once
#     arr_of_rows = CSV.read("path/to/file.csv", **options)
#     # iterator-style:
#     CSV.foreach("path/to/file.csv", **options) do |row|
#       # ...
#     end
#
#     # From a string
#     arr_of_rows = CSV.parse("CSV,data,String", **options)
#     # or
#     CSV.parse("CSV,data,String", **options) do |row|
#       # ...
#     end
#
# ### Writing
#
#     # To a file
#     CSV.open("path/to/file.csv", "wb") do |csv|
#       csv << ["row", "of", "CSV", "data"]
#       csv << ["another", "row"]
#       # ...
#     end
#
#     # To a String
#     csv_string = CSV.generate do |csv|
#       csv << ["row", "of", "CSV", "data"]
#       csv << ["another", "row"]
#       # ...
#     end
#
# ### Shortcuts
#
#     # Core extensions for converting one line
#     csv_string = ["CSV", "data"].to_csv   # to CSV
#     csv_array  = "CSV,String".parse_csv   # from CSV
#
#     # CSV() method
#     CSV             { |csv_out| csv_out << %w{my data here} }  # to $stdout
#     CSV(csv = "")   { |csv_str| csv_str << %w{my data here} }  # to a String
#     CSV($stderr)    { |csv_err| csv_err << %w{my data here} }  # to $stderr
#     CSV($stdin)     { |csv_in|  csv_in.each { |row| p row } }  # from $stdin
#
# ## Data Conversion
#
# ### CSV with headers
#
# CSV allows you to specify the column names of a CSV file, whether they appear
# in the data or are provided separately. If headers are specified, the reading
# methods return an instance of CSV::Table, consisting of CSV::Row objects.
#
#     # Headers are part of data
#     data = CSV.parse(<<~ROWS, headers: true)
#       Name,Department,Salary
#       Bob,Engineering,1000
#       Jane,Sales,2000
#       John,Management,5000
#     ROWS
#
#     data.class      #=> CSV::Table
#     data.first      #=> #<CSV::Row "Name":"Bob" "Department":"Engineering" "Salary":"1000">
#     data.first.to_h #=> {"Name"=>"Bob", "Department"=>"Engineering", "Salary"=>"1000"}
#
#     # Headers provided by developer
#     data = CSV.parse('Bob,Engineering,1000', headers: %i[name department salary])
#     data.first      #=> #<CSV::Row name:"Bob" department:"Engineering" salary:"1000">
#
# ### Typed data reading
#
# CSV allows you to provide a set of data *converters*, i.e. transformations to
# try on input data. A converter can be a symbol drawn from the keys of the
# CSV::Converters constant, or a lambda.
#
#     # Without any converters:
#     CSV.parse('Bob,2018-03-01,100')
#     #=> [["Bob", "2018-03-01", "100"]]
#
#     # With built-in converters:
#     CSV.parse('Bob,2018-03-01,100', converters: %i[numeric date])
#     #=> [["Bob", #<Date: 2018-03-01>, 100]]
#
#     # With custom converters:
#     CSV.parse('Bob,2018-03-01,100', converters: [->(v) { Time.parse(v) rescue v }])
#     #=> [["Bob", 2018-03-01 00:00:00 +0200, "100"]]
#
# ## CSV and Character Encodings (M17n or Multilingualization)
#
# This new CSV parser is m17n savvy.  The parser works in the Encoding of the IO
# or String object being read from or written to. Your data is never transcoded
# (unless you ask Ruby to transcode it for you) and will literally be parsed in
# the Encoding it is in. Thus CSV will return Arrays or Rows of Strings in the
# Encoding of your data. This is accomplished by transcoding the parser itself
# into your Encoding.
#
# Some transcoding must take place, of course, to accomplish this multiencoding
# support. For example, `:col_sep`, `:row_sep`, and `:quote_char` must be
# transcoded to match your data.  Hopefully this makes the entire process feel
# transparent, since CSV's defaults should just magically work for your data.
# However, you can set these values manually in the target Encoding to avoid the
# translation.
#
# It's also important to note that while all of CSV's core parser is now
# Encoding agnostic, some features are not. For example, the built-in converters
# will try to transcode data to UTF-8 before making conversions. Again, you can
# provide custom converters that are aware of your Encodings to avoid this
# translation. It's just too hard for me to support native conversions in all of
# Ruby's Encodings.
#
# Anyway, the practical side of this is simple: make sure IO and String objects
# passed into CSV have the proper Encoding set and everything should just work.
# CSV methods that allow you to open IO objects (CSV::foreach(), CSV::open(),
# CSV::read(), and CSV::readlines()) do allow you to specify the Encoding.
#
# One minor exception comes when generating CSV into a String with an Encoding
# that is not ASCII compatible. There's no existing data for CSV to use to
# prepare itself and thus you will probably need to manually specify the desired
# Encoding for most of those cases. It will try to guess using the fields in a
# row of output though, when using CSV::generate_line() or Array#to_csv().
#
# I try to point out any other Encoding issues in the documentation of methods
# as they come up.
#
# This has been tested to the best of my ability with all non-"dummy" Encodings
# Ruby ships with. However, it is brave new code and may have some bugs. Please
# feel free to [report](mailto:james@grayproductions.net) any issues you find
# with it.
#
class CSV < Object
  include Enumerable[untyped]
  extend Forwardable

  # This method is intended as the primary interface for reading CSV files. You
  # pass a `path` and any `options` you wish to set for the read. Each row of the
  # file will be passed to the provided `block` in turn.
  #
  # The `options` parameter can be anything CSV::new() understands. This method
  # also understands an additional `:encoding` parameter that you can use to
  # specify the Encoding of the data in the file to be read. You must provide this
  # unless your data is in Encoding::default_external(). CSV will use this to
  # determine how to parse the data. You may provide a second Encoding to have the
  # data transcoded as it is read. For example, `encoding: "UTF-32BE:UTF-8"` would
  # read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses
  # it.
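  #
  # A minimal usage sketch (the file name "people.csv" and its "Name" header are
  # hypothetical):
  #
  #     # Print one field of every record, treating the first row as headers
  #     CSV.foreach("people.csv", headers: true) do |row|
  #       puts row["Name"]
  #     end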
  #
  def self.foreach: [U] (String | IO | StringIO path, ?::Hash[Symbol, U] options) { (::Array[String?] arg0) -> void } -> void

  # This constructor will wrap either a String or IO object passed in `data` for
  # reading and/or writing. In addition to the CSV instance methods, several IO
  # methods are delegated. (See CSV::open() for a complete list.) If you pass a
  # String for `data`, you can later retrieve it (after writing to it, for
  # example) with CSV.string().
  #
  # Note that a wrapped String will be positioned at the beginning (for reading).
  # If you want it at the end (for writing), use CSV::generate(). If you want any
  # other positioning, pass a preset StringIO object instead.
  #
  # You may set any reading and/or writing preferences in the `options` Hash.
  # Available options are:
  #
  # **`:col_sep`**
  # :   The String placed between each field. This String will be transcoded into
  #     the data's Encoding before parsing.
  # **`:row_sep`**
  # :   The String appended to the end of each row. This can be set to the special
  #     `:auto` setting, which requests that CSV automatically discover this from
  #     the data. Auto-discovery reads ahead in the data looking for the next
  #     `"\r\n"`, `"\n"`, or `"\r"` sequence. A sequence will be selected even if
  #     it occurs in a quoted field, assuming that you would have the same line
  #     endings there. If none of those sequences is found, if `data` is `ARGF`,
  #     `STDIN`, `STDOUT`, or `STDERR`, or if the stream is only available for
  #     output, the default `$INPUT_RECORD_SEPARATOR` (`$/`) is used. Obviously,
  #     discovery takes a little time. Set manually if speed is important. Also
  #     note that IO objects should be opened in binary mode on Windows if this
  #     feature will be used as the line-ending translation can cause problems
  #     with resetting the document position to where it was before the read
  #     ahead. This String will be transcoded into the data's Encoding before
  #     parsing.
  # **`:quote_char`**
  # :   The character used to quote fields. This has to be a single character
  #     String. This is useful for applications that incorrectly use `'` as the
  #     quote character instead of the correct `"`. CSV will always consider a
  #     double sequence of this character to be an escaped quote. This String will
  #     be transcoded into the data's Encoding before parsing.
  # **`:field_size_limit`**
  # :   This is a maximum size CSV will read ahead looking for the closing quote
  #     for a field. (In truth, it reads to the first line ending beyond this
  #     size.) If a quote cannot be found within the limit CSV will raise a
  #     MalformedCSVError, assuming the data is faulty. You can use this limit to
  #     prevent what are effectively DoS attacks on the parser. However, this
  #     limit can cause a legitimate parse to fail and thus is set to `nil`, or
  #     off, by default.
  # **`:converters`**
  # :   An Array of names from the Converters Hash and/or lambdas that handle
  #     custom conversion. A single converter doesn't have to be in an Array. All
  #     built-in converters try to transcode fields to UTF-8 before converting.
  #     The conversion will fail if the data cannot be transcoded, leaving the
  #     field unchanged.
  # **`:unconverted_fields`**
  # :   If set to `true`, an unconverted_fields() method will be added to all
  #     returned rows (Array or CSV::Row) that will return the fields as they were
  #     before conversion. Note that `:headers` supplied by Array or String were
  #     not fields of the document and thus will have an empty Array attached.
  # **`:headers`**
  # :   If set to `:first_row` or `true`, the initial row of the CSV file will be
  #     treated as a row of headers. If set to an Array, the contents will be used
  #     as the headers. If set to a String, the String is run through a call of
  #     CSV::parse_line() with the same `:col_sep`, `:row_sep`, and `:quote_char`
  #     as this instance to produce an Array of headers. This setting causes
  #     CSV#shift() to return rows as CSV::Row objects instead of Arrays and
  #     CSV#read() to return CSV::Table objects instead of an Array of Arrays.
  # **`:return_headers`**
  # :   When `false`, header rows are silently swallowed. If set to `true`, header
  #     rows are returned in a CSV::Row object with identical headers and fields
  #     (save that the fields do not go through the converters).
  # **`:write_headers`**
  # :   When `true` and `:headers` is set, a header row will be added to the
  #     output.
  # **`:header_converters`**
  # :   Identical in functionality to `:converters` save that the conversions are
  #     only made to header rows. All built-in converters try to transcode headers
  #     to UTF-8 before converting. The conversion will fail if the data cannot be
  #     transcoded, leaving the header unchanged.
  # **`:skip_blanks`**
  # :   When setting a `true` value, CSV will skip over any empty rows. Note that
  #     this setting will not skip rows that contain column separators, even if
  #     the rows contain no actual data. If you want to skip rows that contain
  #     separators but no content, consider using `:skip_lines`, or inspecting
  #     fields.compact.empty? on each row.
  # **`:force_quotes`**
  # :   When setting a `true` value, CSV will quote all CSV fields it creates.
  # **`:skip_lines`**
  # :   When setting an object responding to `match`, every line matching it is
  #     considered a comment and ignored during parsing. When set to a String, it
  #     is first converted to a Regexp. When set to `nil` no line is considered a
  #     comment. If the passed object does not respond to `match`, `ArgumentError`
  #     is raised.
  # **`:liberal_parsing`**
  # :   When setting a `true` value, CSV will attempt to parse input not
  #     conformant with RFC 4180, such as double quotes in unquoted fields.
  # **`:nil_value`**
  # :   When set to an object, the value of any empty field is replaced by that
  #     object instead of nil.
  # **`:empty_value`**
  # :   When set to an object, the value of any blank (empty string) field is
  #     replaced by that object.
  # **`:quote_empty`**
  # :   When setting a `true` value, CSV will quote empty values with double
  #     quotes. When `false`, CSV will emit an empty string for an empty field
  #     value.
  # **`:write_converters`**
  # :   Converts values on each line with the specified `Proc` object(s), which
  #     receive a `String` value and return a `String` or `nil` value. When an
  #     array is specified, each converter will be applied in order.
  # **`:write_nil_value`**
  # :   When a `String` value, `nil` value(s) on each line will be replaced with
  #     the specified value.
  # **`:write_empty_value`**
  # :   When a `String` or `nil` value, empty value(s) on each line will be
  #     replaced with the specified value.
  # **`:strip`**
  # :   When setting a `true` value, CSV will strip "\t\r\n\f\v" around the values.
  #     If you specify a single-character String instead of `true`, CSV will strip
  #     that character from around the values.
  #
  #
  # See CSV::DEFAULT_OPTIONS for the default settings.
  #
  # Options cannot be overridden in the instance methods for performance reasons,
  # so be sure to set what you want here.
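  #
  # A short sketch of wrapping a literal String for reading (the data below is
  # made up):
  #
  #     csv = CSV.new("a,b\n1,2\n", headers: true)
  #     csv.shift   #=> #<CSV::Row "a":"1" "b":"2">
  #     csv.rewind  # delegated IO method; start reading again
  #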
  def initialize: (?String | IO | StringIO io, ?::Hash[Symbol, untyped] options) -> void

  # This method can be used to easily parse CSV out of a String. You may either
  # provide a `block` which will be called with each row of the String in turn, or
  # just use the returned Array of Arrays (when no `block` is given).
  #
  # You pass your `str` to read from, and an optional `options` containing
  # anything CSV::new() understands.
  #
  def self.parse: (String str, ?::Hash[Symbol, untyped] options) ?{ (::Array[String?] arg0) -> void } -> ::Array[::Array[String?]]?

  # This method is a shortcut for converting a single line of a CSV String into an
  # Array. Note that if `line` contains multiple rows, anything beyond the first
  # row is ignored.
  #
  # The `options` parameter can be anything CSV::new() understands.
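  #
  # For example (made-up data):
  #
  #     CSV.parse_line("1,2,3")    #=> ["1", "2", "3"]
  #     CSV.parse_line("1,2\n3,4") #=> ["1", "2"]   # the second row is ignored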
  #
  def self.parse_line: (String str, ?::Hash[Symbol, untyped] options) -> ::Array[String?]?

  # Slurps the remaining rows and returns an Array of Arrays.
  #
  # The data source must be open for reading.
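  #
  # For example (made-up data):
  #
  #     CSV.new("1,2\n3,4\n").read #=> [["1", "2"], ["3", "4"]]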
  #
  def read: () -> ::Array[::Array[String?]]

  def readline: () -> ::Array[String?]?

  # Use to slurp a CSV file into an Array of Arrays. Pass the `path` to the file
  # and any `options` CSV::new() understands. This method also understands an
  # additional `:encoding` parameter that you can use to specify the Encoding of
  # the data in the file to be read. You must provide this unless your data is in
  # Encoding::default_external(). CSV will use this to determine how to parse the
  # data. You may provide a second Encoding to have the data transcoded as it is
  # read. For example, `encoding: "UTF-32BE:UTF-8"` would read UTF-32BE data from
  # the file but transcode it to UTF-8 before CSV parses it.
  #
  def self.read: (String path, ?::Hash[Symbol, untyped] options) -> ::Array[::Array[String?]]

  # This is the primary write method for wrapped Strings and IOs. `row` (an Array
  # or CSV::Row) is converted to CSV and appended to the data source. When a CSV::Row
  # is passed, only the row's fields() are appended to the output.
  #
  # The data source must be open for writing.
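  #
  # A small sketch using a wrapped String (the data is made up):
  #
  #     out = CSV.new(String.new)
  #     out << ["a", "b"]                        # an Array row
  #     out << CSV::Row.new(%w[a b], %w[1 2])    # only the Row's fields are written
  #     out.string                               #=> "a,b\n1,2\n"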
  #
  def <<: (::Array[untyped] | CSV::Row row) -> void

  # This method wraps a String you provide, or an empty default String, in a CSV
  # object which is passed to the provided block. You can use the block to append
  # CSV rows to the String and when the block exits, the final String will be
  # returned.
  #
  # Note that a passed String **is** modified by this method. Call dup() before
  # passing if you need a new String.
  #
  # The `options` parameter can be anything CSV::new() understands.  This method
  # understands an additional `:encoding` parameter when not passed a String to
  # set the base Encoding for the output.  CSV needs this hint if you plan to
  # output non-ASCII compatible data.
  #
  def self.generate: (?String str, **untyped options) { (CSV csv) -> void } -> String
end

# The options used when no overrides are given by calling code. They are:
#
# **`:col_sep`**
# :   `","`
# **`:row_sep`**
# :   `:auto`
# **`:quote_char`**
# :   `'"'`
# **`:field_size_limit`**
# :   `nil`
# **`:converters`**
# :   `nil`
# **`:unconverted_fields`**
# :   `nil`
# **`:headers`**
# :   `false`
# **`:return_headers`**
# :   `false`
# **`:header_converters`**
# :   `nil`
# **`:skip_blanks`**
# :   `false`
# **`:force_quotes`**
# :   `false`
# **`:skip_lines`**
# :   `nil`
# **`:liberal_parsing`**
# :   `false`
# **`:quote_empty`**
# :   `true`
#
#
CSV::DEFAULT_OPTIONS: ::Hash[untyped, untyped]

# The version of the installed library.
#
CSV::VERSION: String

# A CSV::Row is part Array and part Hash. It retains an order for the fields and
# allows duplicates just as an Array would, but also allows you to access fields
# by name just as you could if they were in a Hash.
#
# All rows returned by CSV will be constructed from this class, if header row
# processing is activated.
#
class CSV::Row < Object
  include Enumerable[untyped]
  extend Forwardable

  # If a two-element Array is provided, it is assumed to be a header and field and
  # the pair is appended. A Hash works the same way with the key being the header
  # and the value being the field. Anything else is assumed to be a lone field
  # which is appended with a `nil` header.
  #
  # This method returns the row for chaining.
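  #
  # A small sketch (the headers and fields are made up):
  #
  #     row = CSV::Row.new(["a"], ["1"])
  #     row << ["b", "2"]       # header/field pair
  #     row << { "c" => "3" }   # Hash: key is the header, value the field
  #     row << "4"              # lone field, appended with a nil header
  #     row.fields              #=> ["1", "2", "3", "4"]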
  #
  def <<: (untyped arg) -> untyped

  # Returns `true` if this row contains the same headers and fields in the same
  # order as `other`.
  #
  def ==: (untyped other) -> bool

  alias [] field

  # Looks up the field by the semantics described in CSV::Row.field() and assigns
  # the `value`.
  #
  # Assigning past the end of the row with an index will set all pairs between to
  # `[nil, nil]`. Assigning to an unused header appends the new pair.
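  #
  # For example (made-up data):
  #
  #     row = CSV::Row.new(%w[a], %w[1])
  #     row["a"] = "9"   # existing header: the value is replaced
  #     row["b"] = "2"   # unused header: a new pair is appended
  #     row.fields       #=> ["9", "2"]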
  #
  def []=: (*untyped args) -> untyped

  # Removes a pair from the row by `header` or `index`. The pair is located as
  # described in CSV::Row.field(). The deleted pair is returned, or `nil` if a
  # pair could not be found.
  #
  def delete: (untyped header_or_index, ?untyped minimum_index) -> untyped

  # The provided `block` is passed a header and field for each pair in the row and
  # expected to return `true` or `false`, depending on whether the pair should be
  # deleted.
  #
  # This method returns the row for chaining.
  #
  # If no block is given, an Enumerator is returned.
  #
  def delete_if: () { (*untyped) -> untyped } -> untyped

  # Extracts the nested value specified by the sequence of `index` or `header`
  # objects by calling dig at each step, returning nil if any intermediate step is
  # nil.
  #
  def dig: (untyped index_or_header, *untyped indexes) -> untyped

  # Yields each pair of the row as header and field tuples (much like iterating
  # over a Hash). This method returns the row for chaining.
  #
  # If no block is given, an Enumerator is returned.
  #
  # Support for Enumerable.
  #
  def each: () { (*untyped) -> untyped } -> untyped

  alias each_pair each

  def empty?: (*untyped args) { (*untyped) -> untyped } -> bool

  # This method will fetch the field value by `header`. It has the same behavior
  # as Hash#fetch: if there is a field with the given `header`, its value is
  # returned. Otherwise, if a block is given, it is yielded the `header` and its
  # result is returned; if a `default` is given as the second argument, it is
  # returned; otherwise a KeyError is raised.
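  #
  # For example (made-up data):
  #
  #     row = CSV::Row.new(%w[a], %w[1])
  #     row.fetch("a")             #=> "1"
  #     row.fetch("x", "default")  #=> "default"
  #     row.fetch("x") { |h| h }   #=> "x"
  #     row.fetch("x")             # raises KeyError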
  #
  def fetch: (untyped header, *untyped varargs) ?{ (*untyped) -> untyped } -> untyped

  # This method will return the field value by `header` or `index`. If a field is
  # not found, `nil` is returned.
  #
  # When provided, `offset` ensures that a header match occurs on or later than
  # the `offset` index. You can use this to find duplicate headers, without
  # resorting to hard-coding exact indices.
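  #
  # For example, with a made-up duplicate header:
  #
  #     row = CSV::Row.new(%w[a a], %w[1 2])
  #     row.field("a")     #=> "1"
  #     row.field("a", 1)  #=> "2"   # match at index 1 or later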
  #
  def field: (untyped header_or_index, ?untyped minimum_index) -> untyped

  # Returns `true` if `data` matches a field in this row, and `false` otherwise.
  #
  def field?: (untyped data) -> bool

  # Returns `true` if this is a field row.
  #
  def field_row?: () -> bool

  # This method accepts any number of arguments which can be headers, indices,
  # Ranges of either, or two-element Arrays containing a header and offset. Each
  # argument will be replaced with a field lookup as described in
  # CSV::Row.field().
  #
  # If called with no arguments, all fields are returned.
  #
  def fields: (*untyped headers_and_or_indices) -> untyped

  # Returns `true` if there is a field with the given `header`.
  #
  def has_key?: (untyped header) -> bool

  alias header? has_key?

  # Returns `true` if this is a header row.
  #
  def header_row?: () -> bool

  # Returns the headers of this row.
  #
  def headers: () -> untyped

  alias include? has_key?

  # This method will return the index of a field with the provided `header`. The
  # `offset` can be used to locate duplicate header names, as described in
  # CSV::Row.field().
  #
  def index: (untyped header, ?untyped minimum_index) -> untyped

  # A summary of fields, by header, in an ASCII compatible String.
  #
  def inspect: () -> String

  alias key? has_key?

  def length: (*untyped args) { (*untyped) -> untyped } -> untyped

  alias member? has_key?

  # A shortcut for appending multiple fields. Equivalent to:
  #
  #     args.each { |arg| csv_row << arg }
  #
  # This method returns the row for chaining.
  #
  def push: (*untyped args) -> untyped

  def size: (*untyped args) { (*untyped) -> untyped } -> untyped

  # Returns the row as a CSV String. Headers are not used. Equivalent to:
  #
  #     csv_row.fields.to_csv( options )
  #
  def to_csv: (**untyped) -> untyped

  # Collapses the row into a simple Hash. Be warned that this discards field order
  # and clobbers duplicate fields.
  #
  def to_h: () -> untyped

  alias to_hash to_h

  alias to_s to_csv

  alias values_at fields
end

class CSV::FieldInfo < Struct[untyped]
end

# The error thrown when the parser encounters illegal CSV formatting.
#
class CSV::MalformedCSVError < RuntimeError
end

# A CSV::Table is a two-dimensional data structure for representing CSV
# documents. Tables allow you to work with the data by row or column, manipulate
# the data, and even convert the results back to CSV, if needed.
#
# All tables returned by CSV will be constructed from this class, if header row
# processing is activated.
#
class CSV::Table[out Elem] < Object
  include Enumerable[untyped]
  extend Forwardable

  # Constructs a new CSV::Table from `array_of_rows`, which are expected to be
  # CSV::Row objects. All rows are assumed to have the same headers.
  #
  # The optional `headers` parameter can be set to an Array of headers. If headers
  # aren't set, they are fetched from the CSV::Row objects. Otherwise, the headers()
  # method will return the headers given in the `headers` argument.
  #
  # A CSV::Table object supports the following Array methods through delegation:
  #
  # *   empty?()
  # *   length()
  # *   size()
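  #
  # A small construction sketch (the rows are made up):
  #
  #     rows  = [CSV::Row.new(%w[a b], %w[1 2]),
  #              CSV::Row.new(%w[a b], %w[3 4])]
  #     table = CSV::Table.new(rows)
  #     table.headers  #=> ["a", "b"]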
  #
  def initialize: (untyped array_of_rows, ?headers: untyped) -> untyped

  # Adds a new row to the bottom end of this table. You can provide an Array,
  # which will be converted to a CSV::Row (inheriting the table's headers()), or a
  # CSV::Row.
  #
  # This method returns the table for chaining.
  #
  def <<: (untyped row_or_array) -> untyped

  # Returns `true` if all rows of this table ==() `other`'s rows.
  #
  def ==: (untyped other) -> bool

  # In the default mixed mode, this method returns rows for index access and
  # columns for header access. You can force the index association by first
  # calling by_col!() or by_row!().
  #
  # Columns are returned as an Array of values.  Altering that Array has no effect
  # on the table.
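  #
  # For example (made-up data):
  #
  #     table = CSV.parse("a,b\n1,2\n3,4\n", headers: true)
  #     table[0]    #=> #<CSV::Row "a":"1" "b":"2">   # row by index
  #     table["a"]  #=> ["1", "3"]                    # column by header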
  #
  def []: (untyped index_or_header) -> untyped

  # In the default mixed mode, this method assigns rows for index access and
  # columns for header access. You can force the index association by first
  # calling by_col!() or by_row!().
  #
  # Rows may be set to an Array of values (which will inherit the table's
  # headers()) or a CSV::Row.
  #
  # Columns may be set to a single value, which is copied to each row of the
  # column, or an Array of values. Arrays of values are assigned to rows top to
  # bottom in row major order. Excess values are ignored and if the Array does not
  # have a value for each row the extra rows will receive a `nil`.
  #
  # Assigning to an existing column or row clobbers the data. Assigning to new
  # columns creates them at the right end of the table.
  #
  def []=: (untyped index_or_header, untyped value) -> untyped

  # Returns a duplicate table object, in column mode. This is handy for chaining
  # in a single call without changing the table mode, but be aware that this
  # method can consume a fair amount of memory for bigger data sets.
  #
  # This method returns the duplicate table for chaining. Don't chain destructive
  # methods (like []=()) this way though, since you are working with a duplicate.
  #
  def by_col: () -> untyped

  # Switches the mode of this table to column mode. All calls to indexing and
  # iteration methods will work with columns until the mode is changed again.
  #
  # This method returns the table and is safe to chain.
  #
  def by_col!: () -> untyped

  # Returns a duplicate table object, in mixed mode. This is handy for chaining in
  # a single call without changing the table mode, but be aware that this method
  # can consume a fair amount of memory for bigger data sets.
  #
  # This method returns the duplicate table for chaining.  Don't chain destructive
  # methods (like []=()) this way though, since you are working with a duplicate.
  #
  def by_col_or_row: () -> untyped

  # Switches the mode of this table to mixed mode. All calls to indexing and
  # iteration methods will use the default intelligent indexing system until the
  # mode is changed again. In mixed mode an index is assumed to be a row reference
  # while anything else is assumed to be column access by headers.
  #
  # This method returns the table and is safe to chain.
  #
  def by_col_or_row!: () -> untyped

  # Returns a duplicate table object, in row mode.  This is handy for chaining in
  # a single call without changing the table mode, but be aware that this method
  # can consume a fair amount of memory for bigger data sets.
  #
  # This method returns the duplicate table for chaining.  Don't chain destructive
  # methods (like []=()) this way though, since you are working with a duplicate.
  #
  def by_row: () -> untyped

  # Switches the mode of this table to row mode. All calls to indexing and
  # iteration methods will work with rows until the mode is changed again.
  #
  # This method returns the table and is safe to chain.
  #
  def by_row!: () -> untyped

  # Removes and returns the indicated columns or rows. In the default mixed mode
  # indices refer to rows and everything else is assumed to be a column header.
  # Use by_col!() or by_row!() to force the lookup.
  #
  def delete: (*untyped indexes_or_headers) -> untyped

  # Removes any column or row for which the block returns `true`. In the default
  # mixed mode or row mode, iteration is the standard row major walking of rows.
  # In column mode, iteration will `yield` two element tuples containing the
  # column name and an Array of values for that column.
  #
  # This method returns the table for chaining.
  #
  # If no block is given, an Enumerator is returned.
  #
  def delete_if: () { (*untyped) -> untyped } -> untyped

  # Extracts the nested value specified by the sequence of `index` or `header`
  # objects by calling dig at each step, returning nil if any intermediate step is
  # nil.
  #
  def dig: (untyped index_or_header, *untyped index_or_headers) -> untyped

  # In the default mixed mode or row mode, iteration is the standard row major
  # walking of rows. In column mode, iteration will `yield` two element tuples
  # containing the column name and an Array of values for that column.
  #
  # This method returns the table for chaining.
  #
  # If no block is given, an Enumerator is returned.
  #
  def each: () { (*untyped) -> untyped } -> untyped

  def empty?: (*untyped args) { (*untyped) -> untyped } -> untyped

  # Returns the headers for the first row of this table (assumed to match all
  # other rows). The headers Array passed to CSV::Table.new is returned for empty
  # tables.
  #
  def headers: () -> untyped

  # Shows the mode and size of this table in a US-ASCII String.
  #
  def inspect: () -> String

  def length: (*untyped args) { (*untyped) -> untyped } -> untyped

  # The current access mode for indexing and iteration.
  #
  def mode: () -> untyped

  # A shortcut for appending multiple rows. Equivalent to:
  #
  #     rows.each { |row| self << row }
  #
  # This method returns the table for chaining.
  #
  def push: (*untyped rows) -> untyped

  def size: (*untyped args) { (*untyped) -> untyped } -> untyped

  # Returns the table as an Array of Arrays. Headers will be the first row, then
  # all of the field rows will follow.
  #
  def to_a: () -> untyped

  # Returns the table as a complete CSV String. Headers will be listed first, then
  # all of the field rows.
  #
  # This method assumes you want the Table.headers(), unless you explicitly pass
  # `:write_headers => false`.
  #
  def to_csv: (?write_headers: boolish, **untyped) -> untyped

  alias to_s to_csv

  # The mixed mode default is to treat a list of indices as row access, returning
  # the rows indicated. Anything else is considered columnar access. For columnar
  # access, the return set has an Array for each row with the values indicated by
  # the headers in each Array. You can force column or row mode using by_col!() or
  # by_row!().
  #
  # You cannot mix column and row access.
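  #
  # For example (made-up data):
  #
  #     table = CSV.parse("a,b\n1,2\n3,4\n", headers: true)
  #     table.values_at(0, 1)   # the first two rows, as CSV::Row objects
  #     table.values_at("a")    #=> [["1"], ["3"]]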
  #
  def values_at: (*untyped indices_or_headers) -> untyped
end
