count_fields() counts the number of fields in each line of a file. This is useful for diagnosing problems with functions that fail to parse correctly.

count_fields(file, tokenizer, skip = 0, n_max = -1L)

Arguments

file

Either a path to a file, a connection, or literal data (either a single string or a raw vector).

Files ending in .gz, .bz2, .xz, or .zip will be automatically uncompressed. Files starting with http://, https://, ftp://, or ftps:// will be automatically downloaded. Remote gz files can also be automatically downloaded and decompressed.

Literal data is most useful for examples and tests. It must contain at least one new line to be recognised as data (instead of a path) or be a vector of length greater than one.

Using a value of clipboard() will read from the system clipboard.
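
For instance, a string containing a newline is treated as literal data rather than a path, so suspect rows can be checked inline. A minimal sketch; the example string and its field counts are illustrative and not taken from the original page:

library(readr)

# The embedded newlines mark this string as literal data, not a file path.
csv_text <- "a,b,c\n1,2,3\n4,5,6,7"
count_fields(csv_text, tokenizer_csv())
# Expect 3 3 4: the third line carries an extra field, the kind of
# inconsistency count_fields() is meant to surface.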

tokenizer

A tokenizer that specifies how to break the file up into fields, e.g., tokenizer_csv(), tokenizer_fwf().
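
The choice of tokenizer determines what counts as a field boundary. A hedged sketch contrasting two of readr's built-in tokenizers on the same literal string (tokenizer_tsv() is not mentioned above but ships with readr):

# Commas separate fields for the CSV tokenizer but not for the TSV one.
count_fields("a,b,c\n1,2,3", tokenizer_csv())  # expect 3 3
count_fields("a,b,c\n1,2,3", tokenizer_tsv())  # expect 1 1 (no tabs present)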

skip

Number of lines to skip before reading data.

n_max

Optionally, maximum number of rows to count fields for.

Examples

count_fields(readr_example("mtcars.csv"), tokenizer_csv())
#> [1] 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
#> [26] 11 11 11 11 11 11 11 11
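
Building on the example above, skip and n_max can be combined to inspect only a slice of the file. A sketch, assuming the same bundled mtcars.csv; the expected counts follow from the output shown above:

# Skip the header row and count fields for the next five data rows only.
count_fields(readr_example("mtcars.csv"), tokenizer_csv(), skip = 1, n_max = 5)
# Each of the five rows should report 11 fields, as in the full output above.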