path: root/Data/DefaultContent/Libraries/lua-csv
Diffstat (limited to 'Data/DefaultContent/Libraries/lua-csv')
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/.gitignore                     |   2
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/AUTHORS                        |   2
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/LICENSE                        |  22
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/README.md                      |  93
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/lua/config.ld                  |   4
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/lua/csv.lua                    | 557
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/lua/test.lua                   | 102
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/makefile                       |  14
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-1-1.rockspec     |  24
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-scm-1.rockspec   |  23
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/test-data/BOM.csv              |   3
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/test-data/bars.txt             |   7
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/test-data/blank-line.csv       |   2
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/test-data/embedded-newlines.csv |   8
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/test-data/embedded-quotes.csv  |   2
-rw-r--r--  Data/DefaultContent/Libraries/lua-csv/test-data/header.csv           |   3
16 files changed, 0 insertions(+), 868 deletions(-)
diff --git a/Data/DefaultContent/Libraries/lua-csv/.gitignore b/Data/DefaultContent/Libraries/lua-csv/.gitignore
deleted file mode 100644
index 131f9b6..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-.DS_Store
-lua/docs
\ No newline at end of file
diff --git a/Data/DefaultContent/Libraries/lua-csv/AUTHORS b/Data/DefaultContent/Libraries/lua-csv/AUTHORS
deleted file mode 100644
index 84961bd..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/AUTHORS
+++ /dev/null
@@ -1,2 +0,0 @@
-Leyland, Geoff
-Martin, Kevin
diff --git a/Data/DefaultContent/Libraries/lua-csv/LICENSE b/Data/DefaultContent/Libraries/lua-csv/LICENSE
deleted file mode 100644
index d8472a0..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/LICENSE
+++ /dev/null
@@ -1,22 +0,0 @@
-Copyright (c) 2013-2014 Incremental IP Limited
-Copyright (c) 2014 Kevin Martin
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in
-all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
-
-
diff --git a/Data/DefaultContent/Libraries/lua-csv/README.md b/Data/DefaultContent/Libraries/lua-csv/README.md
deleted file mode 100644
index d10314a..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/README.md
+++ /dev/null
@@ -1,93 +0,0 @@
-# Lua-CSV - delimited file reading
-
-## 1. What?
-
-Lua-CSV is a Lua module for reading delimited text files (popularly CSV and
-tab-separated files, but you can specify the separator).
-
-Lua-CSV tries to auto-detect whether a file is delimited with commas or tabs,
-copes with non-native newlines, survives newlines and quotes inside quoted
-fields and offers an iterator interface so it can handle large files.
-
-
-## 2. How?
-
- local csv = require("csv")
- local f = csv.open("file.csv")
- for fields in f:lines() do
- for i, v in ipairs(fields) do print(i, v) end
- end
-
-`csv.open` takes a second argument `parameters`, a table of parameters
-controlling how the file is read:
-
-+ `separator` sets the separator. It'll probably guess the separator
- correctly if it's a comma or a tab (unless, say, the first field in a
- tab-delimited file contains a comma), but if you want something else you'll
- have to set this. It could be more than one character, but it's used as
- part of a set: `"["..sep.."\n\r]"`
-
-+ Set `header` to true if the file contains a header. Each set of fields
-  will then be keyed by the names in the header rather than by integer index.
-
-+ `columns` provides a mechanism for column remapping.
- Suppose you have a csv file as follows:
-
- Word,Number
- ONE,10
-
- And columns is:
-
- + `{ word = true }` then the only field in the file would be
- `{ word = "ONE" }`
- + `{ first = { name = "word"} }` then it would be `{ first = "ONE" }`
- + `{ word = { transform = string.lower }}` would give `{ word = "one" }`
- + finally,
-
-      { word = true,
-        number = { transform = function(x) return tonumber(x) / 10 end }}
-
- would give `{ word = "ONE", number = 1 }`
-
- A column can have more than one name:
- `{ first = { names = {"word", "worm"}}}` to help cope with badly specified
- file formats and spelling mistakes.
-
-+ `buffer_size` controls the size of the blocks the file is read in. The
- default is 1MB. It used to be 4096 bytes which is what `pagesize` says on
- my system, but that seems kind of small.
-
-`csv.openstring` works exactly like `csv.open` except the first argument
-is the contents of the csv file. In this case `buffer_size` is set to
-the length of the string.
-
-## 3. Requirements
-
-Lua 5.1, 5.2 or LuaJIT.
-
-
-## 4. Issues
-
-+ Some whitespace-delimited files might use more than one space between
- fields, for example if the columns are "manually" aligned:
-
- street nr city
- "Oneway Street" 1 Toontown
-
- It won't cope with this - you'll get lots of extra empty fields.
-
-## 5. Wishlist
-
-+ Tests would be nice.
-+ So would better LDoc documentation.
-
-
-## 6. Alternatives
-
-+ [Penlight](http://github.com/stevedonovan/penlight) contains delimited
- file reading. It reads the whole file in one go.
-+ The Lua Wiki contains two pages on CSV
- [here](http://lua-users.org/wiki/LuaCsv) and
- [here](http://lua-users.org/wiki/CsvUtils).
-+ There's an example using [LPeg](http://www.inf.puc-rio.br/~roberto/lpeg/)
- to parse CSV [here](http://www.inf.puc-rio.br/~roberto/lpeg/#CSV)
diff --git a/Data/DefaultContent/Libraries/lua-csv/lua/config.ld b/Data/DefaultContent/Libraries/lua-csv/lua/config.ld
deleted file mode 100644
index af51949..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/lua/config.ld
+++ /dev/null
@@ -1,4 +0,0 @@
-project = "Lua-CSV"
-title = "Lua-CSV Source Documentation"
-description = "Lua-CSV reads delimited text files"
-format = "markdown"
diff --git a/Data/DefaultContent/Libraries/lua-csv/lua/csv.lua b/Data/DefaultContent/Libraries/lua-csv/lua/csv.lua
deleted file mode 100644
index 64196c0..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/lua/csv.lua
+++ /dev/null
@@ -1,557 +0,0 @@
---- Read a comma or tab (or other delimiter) separated file.
--- This version of a CSV reader differs from others I've seen in that it
---
--- + handles embedded newlines in fields (if they're delimited with double
--- quotes)
--- + is line-ending agnostic
-- + reads the file line-by-line, so it can potentially handle large
--- files.
---
--- Of course, for such a simple format, CSV is horribly complicated, so it
--- likely gets something wrong.
-
--- (c) Copyright 2013-2014 Incremental IP Limited.
--- (c) Copyright 2014 Kevin Martin
--- Available under the MIT licence. See LICENSE for more information.
-
-local DEFAULT_BUFFER_BLOCK_SIZE = 1024 * 1024
-
-
-------------------------------------------------------------------------------
-
-local function trim_space(s)
- return s:match("^%s*(.-)%s*$")
-end
-
-
-local function fix_quotes(s)
- -- the sub(..., -2) is to strip the trailing quote
- return string.sub(s:gsub('""', '"'), 1, -2)
-end
-
-
-------------------------------------------------------------------------------
-
-local column_map = {}
-column_map.__index = column_map
-
-
-local function normalise_string(s)
- return (s:lower():gsub("[^%w%d]+", " "):gsub("^ *(.-) *$", "%1"))
-end
-
-
---- Parse a list of columns.
--- The main job here is normalising column names and dealing with columns
--- for which we have more than one possible name in the header.
-function column_map:new(columns)
- local name_map = {}
- for n, v in pairs(columns) do
- local names
- local t
- if type(v) == "table" then
- t = { transform = v.transform, default = v.default }
- if v.name then
- names = { normalise_string(v.name) }
- elseif v.names then
- names = v.names
- for i, n in ipairs(names) do names[i] = normalise_string(n) end
- end
- else
- if type(v) == "function" then
- t = { transform = v }
- else
- t = {}
- if type(v) == "string" then
- names = { normalise_string(v) }
- end
- end
- end
-
- if not names then
- names = { (n:lower():gsub("[^%w%d]+", " ")) }
- end
-
- t.name = n
- for _, n in ipairs(names) do
- name_map[n:lower()] = t
- end
- end
-
- return setmetatable({ name_map = name_map }, column_map)
-end
-
-
---- Map "virtual" columns to file columns.
--- Once we've read the header, work out which columns we're interested in and
--- what to do with them. Mostly this is about checking we've got the columns
--- we need and writing a nice complaint if we haven't.
-function column_map:read_header(header)
- local index_map = {}
-
- -- Match the columns in the file to the columns in the name map
- local found = {}
- local found_any
- for i, word in ipairs(header) do
- word = normalise_string(word)
- local r = self.name_map[word]
- if r then
- index_map[i] = r
- found[r.name] = true
- found_any = true
- end
- end
-
- if not found_any then return end
-
- -- check we found all the columns we need
- local not_found = {}
- for name, r in pairs(self.name_map) do
- if not found[r.name] then
- local nf = not_found[r.name]
- if nf then
- nf[#nf+1] = name
- else
- not_found[r.name] = { name }
- end
- end
- end
- -- If any columns are missing, assemble an error message
- if next(not_found) then
- local problems = {}
- for k, v in pairs(not_found) do
- local missing
- if #v == 1 then
- missing = "'"..v[1].."'"
- else
- missing = v[1]
- for i = 2, #v - 1 do
- missing = missing..", '"..v[i].."'"
- end
- missing = missing.." or '"..v[#v].."'"
- end
- problems[#problems+1] = "Couldn't find a column named "..missing
- end
- error(table.concat(problems, "\n"), 0)
- end
-
- self.index_map = index_map
- return true
-end
-
-
-function column_map:transform(value, index)
- local field = self.index_map[index]
- if field then
- if field.transform then
- local ok
- ok, value = pcall(field.transform, value)
- if not ok then
- error(("Error reading field '%s': %s"):format(field.name, value), 0)
- end
- end
- return value or field.default, field.name
- end
-end
-
-
-------------------------------------------------------------------------------
-
-local file_buffer = {}
-file_buffer.__index = file_buffer
-
-function file_buffer:new(file, buffer_block_size)
- return setmetatable({
- file = file,
- buffer_block_size = buffer_block_size or DEFAULT_BUFFER_BLOCK_SIZE,
- buffer_start = 0,
- buffer = "",
- }, file_buffer)
-end
-
-
---- Cut the front off the buffer if we've already read it
-function file_buffer:truncate(p)
- p = p - self.buffer_start
- if p > self.buffer_block_size then
- local remove = self.buffer_block_size *
- math.floor((p-1) / self.buffer_block_size)
- self.buffer = self.buffer:sub(remove + 1)
- self.buffer_start = self.buffer_start + remove
- end
-end
-
-
---- Find something in the buffer, extending it if necessary
-function file_buffer:find(pattern, init)
- while true do
- local first, last, capture =
- self.buffer:find(pattern, init - self.buffer_start)
- -- if we found nothing, or the last character is at the end of the
- -- buffer (and the match could potentially be longer) then read some
- -- more.
- if not first or last == #self.buffer then
- local s = self.file:read(self.buffer_block_size)
- if not s then
- if not first then
- return
- else
- return first + self.buffer_start, last + self.buffer_start, capture
- end
- end
- self.buffer = self.buffer..s
- else
- return first + self.buffer_start, last + self.buffer_start, capture
- end
- end
-end
-
-
---- Extend the buffer so we can see more
-function file_buffer:extend(offset)
- local extra = offset - #self.buffer - self.buffer_start
- if extra > 0 then
- local size = self.buffer_block_size *
- math.ceil(extra / self.buffer_block_size)
- local s = self.file:read(size)
- if not s then return end
- self.buffer = self.buffer..s
- end
-end
-
-
---- Get a substring from the buffer, extending it if necessary
-function file_buffer:sub(a, b)
- self:extend(b)
- b = b == -1 and b or b - self.buffer_start
- return self.buffer:sub(a - self.buffer_start, b)
-end
-
-
---- Close a file buffer
-function file_buffer:close()
- self.file:close()
- self.file = nil
-end
-
-
-------------------------------------------------------------------------------
-
-local separator_candidates = { ",", "\t", "|" }
-local guess_separator_params = { record_limit = 8; }
-
-
-local function try_separator(buffer, sep, f)
- guess_separator_params.separator = sep
- local min, max = math.huge, 0
- local lines, split_lines = 0, 0
- local iterator = coroutine.wrap(function() f(buffer, guess_separator_params) end)
- for t in iterator do
- min = math.min(min, #t)
- max = math.max(max, #t)
- split_lines = split_lines + (t[2] and 1 or 0)
- lines = lines + 1
- end
- if split_lines / lines > 0.75 then
- return max - min
- else
- return math.huge
- end
-end
-
-
---- If the user hasn't specified a separator, try to work out what it is.
-local function guess_separator(buffer, f)
- local best_separator, lowest_diff = "", math.huge
- for _, s in ipairs(separator_candidates) do
- local ok, diff = pcall(function() return try_separator(buffer, s, f) end)
- if ok and diff < lowest_diff then
- best_separator = s
- lowest_diff = diff
- end
- end
-
- return best_separator
-end
-
-
-local unicode_BOMS =
-{
- {
- length = 2,
- BOMS =
- {
- ["\254\255"] = true, -- UTF-16 big-endian
- ["\255\254"] = true, -- UTF-16 little-endian
- }
- },
- {
- length = 3,
- BOMS =
- {
- ["\239\187\191"] = true, -- UTF-8
- }
- }
-}
-
-
-local function find_unicode_BOM(sub)
- for _, x in ipairs(unicode_BOMS) do
- local code = sub(1, x.length)
- if x.BOMS[code] then
- return x.length
- end
- end
- return 0
-end
-
-
---- Iterate through the records in a file
--- Since records might be more than one line (if there's a newline in quotes)
--- and line-endings might not be native, we read the file in chunks using
--- a file_buffer rather than line-by-line using io.lines.
-local function separated_values_iterator(buffer, parameters)
- local field_start = 1
-
- local advance
- if buffer.truncate then
- advance = function(n)
- field_start = field_start + n
- buffer:truncate(field_start)
- end
- else
- advance = function(n)
- field_start = field_start + n
- end
- end
-
-
- local function field_sub(a, b)
- b = b == -1 and b or b + field_start - 1
- return buffer:sub(a + field_start - 1, b)
- end
-
-
- local function field_find(pattern, init)
- init = init or 1
- local f, l, c = buffer:find(pattern, init + field_start - 1)
- if not f then return end
- return f - field_start + 1, l - field_start + 1, c
- end
-
-
- -- Is there some kind of Unicode BOM here?
- advance(find_unicode_BOM(field_sub))
-
-
- -- Start reading the file
- local sep = "(["..(parameters.separator or
- guess_separator(buffer, separated_values_iterator)).."\n\r])"
- local line_start = 1
- local line = 1
- local field_count, fields, starts, nonblanks = 0, {}, {}
- local header, header_read
- local field_start_line, field_start_column
- local record_count = 0
-
-
- local function problem(message)
- error(("%s:%d:%d: %s"):
- format(parameters.filename, field_start_line, field_start_column,
- message), 0)
- end
-
-
- while true do
- local field_end, sep_end, this_sep
- local tidy
- field_start_line = line
- field_start_column = field_start - line_start + 1
-
- -- If the field is quoted, go find the other quote
- if field_sub(1, 1) == '"' then
- advance(1)
- local current_pos = 0
- repeat
- local a, b, c = field_find('"("?)', current_pos + 1)
- current_pos = b
- until c ~= '"'
- if not current_pos then problem("unmatched quote") end
- tidy = fix_quotes
- field_end, sep_end, this_sep = field_find(" *([^ ])", current_pos+1)
- if this_sep and not this_sep:match(sep) then problem("unmatched quote") end
- else
- field_end, sep_end, this_sep = field_find(sep, 1)
- tidy = trim_space
- end
-
- -- Look for the separator or a newline or the end of the file
- field_end = (field_end or 0) - 1
-
- -- Read the field, then convert all the line endings to \n, and
- -- count any embedded line endings
- local value = field_sub(1, field_end)
- value = value:gsub("\r\n", "\n"):gsub("\r", "\n")
- for nl in value:gmatch("\n()") do
- line = line + 1
- line_start = nl + field_start
- end
-
- value = tidy(value)
- if #value > 0 then nonblanks = true end
- field_count = field_count + 1
-
- -- Insert the value into the table for this "line"
- local key
- if parameters.column_map and header_read then
- local ok
- ok, value, key = pcall(parameters.column_map.transform,
- parameters.column_map, value, field_count)
- if not ok then problem(value) end
- elseif header then
- key = header[field_count]
- else
- key = field_count
- end
- if key then
- fields[key] = value
- starts[key] = { line=field_start_line, column=field_start_column }
- end
-
- -- if we ended on a newline then yield the fields on this line.
- if not this_sep or this_sep == "\r" or this_sep == "\n" then
- if parameters.column_map and not header_read then
- header_read = parameters.column_map:read_header(fields)
- elseif parameters.header and not header_read then
- if nonblanks or field_count > 1 then -- ignore blank lines
- header = fields
- header_read = true
- end
- else
- if nonblanks or field_count > 1 then -- ignore blank lines
- coroutine.yield(fields, starts)
- record_count = record_count + 1
- if parameters.record_limit and
- record_count >= parameters.record_limit then
- break
- end
- end
- end
- field_count, fields, starts, nonblanks = 0, {}, {}
- end
-
- -- If we *really* didn't find a separator then we're done.
- if not sep_end then break end
-
- -- If we ended on a newline then count it.
- if this_sep == "\r" or this_sep == "\n" then
- if this_sep == "\r" and field_sub(sep_end+1, sep_end+1) == "\n" then
- sep_end = sep_end + 1
- end
- line = line + 1
- line_start = field_start + sep_end
- end
-
- advance(sep_end)
- end
-end
-
-
-------------------------------------------------------------------------------
-
-local buffer_mt =
-{
- lines = function(t)
- return coroutine.wrap(function()
- separated_values_iterator(t.buffer, t.parameters)
- end)
- end,
- close = function(t)
- if t.buffer.close then t.buffer:close() end
- end,
- name = function(t)
- return t.parameters.filename
- end,
-}
-buffer_mt.__index = buffer_mt
-
-
---- Use an existing file or buffer as a stream to read csv from.
--- (A buffer is just something that looks like a string in that we can do
--- `buffer:sub()` and `buffer:find()`)
--- @return a file object
-local function use(
- buffer, -- ?string|file|buffer: the buffer to read from. If it's:
- -- - a string, read from that;
- -- - a file, turn it into a file_buffer;
- -- - nil, read from stdin
-- otherwise assume it's already a buffer.
- parameters) -- ?table: parameters controlling reading the file.
- -- See README.md
- parameters = parameters or {}
- parameters.filename = parameters.filename or "<unknown>"
- parameters.column_map = parameters.columns and
- column_map:new(parameters.columns)
-
- if not buffer then
- buffer = file_buffer:new(io.stdin)
- elseif io.type(buffer) == "file" then
- buffer = file_buffer:new(buffer)
- end
-
- local f = { buffer = buffer, parameters = parameters }
- return setmetatable(f, buffer_mt)
-end
-
-
-------------------------------------------------------------------------------
-
---- Open a file for reading as a delimited file
--- @return a file object
-local function open(
- filename, -- string: name of the file to open
- parameters) -- ?table: parameters controlling reading the file.
- -- See README.md
- local file, message = io.open(filename, "r")
- if not file then return nil, message end
-
- parameters = parameters or {}
- parameters.filename = filename
- return use(file_buffer:new(file), parameters)
-end
-
-
-------------------------------------------------------------------------------
-
-local function makename(s)
- local t = {}
- t[#t+1] = "<(String) "
- t[#t+1] = (s:gmatch("[^\n]+")() or ""):sub(1,15)
- if #t[#t] > 14 then t[#t+1] = "..." end
- t[#t+1] = " >"
- return table.concat(t)
-end
-
-
---- Open a string for reading as a delimited file
--- @return a file object
-local function openstring(
- filecontents, -- string: The contents of the delimited file
- parameters) -- ?table: parameters controlling reading the file.
- -- See README.md
-
- parameters = parameters or {}
-
-
- parameters.filename = parameters.filename or makename(filecontents)
- parameters.buffer_size = parameters.buffer_size or #filecontents
- return use(filecontents, parameters)
-end
-
-
-------------------------------------------------------------------------------
-
-return { open = open, openstring = openstring, use = use }
-
-------------------------------------------------------------------------------
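csv.lua above ends by returning three constructors: `open`, `openstring` and `use`. A short sketch of `use` wrapping an already-opened file handle; the filename and the tab separator here are illustrative assumptions, not part of the original change:

```lua
local csv = require("csv")

-- "data.tsv" is a hypothetical file; `use` also accepts a plain string,
-- or nil to read from stdin, as the comments above describe.
local handle = assert(io.open("data.tsv", "r"))
local f = csv.use(handle, { separator = "\t" })

for fields in f:lines() do
  print(table.concat(fields, " | "))
end
f:close()  -- closes the underlying file via file_buffer:close()
```

Because `use` detects `io.type(buffer) == "file"` and wraps the handle in a `file_buffer`, the file is read in blocks rather than loaded whole.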
diff --git a/Data/DefaultContent/Libraries/lua-csv/lua/test.lua b/Data/DefaultContent/Libraries/lua-csv/lua/test.lua
deleted file mode 100644
index f418cf6..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/lua/test.lua
+++ /dev/null
@@ -1,102 +0,0 @@
-pcall(require, "strict")
-local csv = require"csv"
-
-local errors = 0
-
-local function testhandle(handle, correct_result)
- local result = {}
- for r in handle:lines() do
- if not r[1] then
- local r2 = {}
- for k, v in pairs(r) do r2[#r2+1] = k..":"..tostring(v) end
- table.sort(r2)
- r = r2
- end
- result[#result+1] = table.concat(r, ",")
- end
-
- handle:close()
-
- result = table.concat(result, "!\n").."!"
- if result ~= correct_result then
- io.stderr:write(
- ("Error reading '%s':\nExpected output:\n%s\n\nActual output:\n%s\n\n"):
- format(handle:name(), correct_result, result))
- errors = errors + 1
- return false
- end
- return true
-end
-
-local function test(filename, correct_result, parameters)
- parameters = parameters or {}
- for i = 1, 16 do
- parameters.buffer_size = i
- local f = csv.open(filename, parameters)
- local fileok = testhandle(f, correct_result)
-
- if fileok then
- f = io.open(filename, "r")
- local data = f:read("*a")
- f:close()
-
- f = csv.openstring(data, parameters)
- testhandle(f, correct_result)
- end
- end
-end
-
-test("../test-data/embedded-newlines.csv", [[
-embedded
-newline,embedded
-newline,embedded
-newline!
-embedded
-newline,embedded
-newline,embedded
-newline!]])
-
-test("../test-data/embedded-quotes.csv", [[
-embedded "quotes",embedded "quotes",embedded "quotes"!
-embedded "quotes",embedded "quotes",embedded "quotes"!]])
-
-test("../test-data/header.csv", [[
-alpha:ONE,bravo:two,charlie:3!
-alpha:four,bravo:five,charlie:6!]], {header=true})
-
-test("../test-data/header.csv", [[
-apple:one,charlie:30!
-apple:four,charlie:60!]],
-{ columns = {
- apple = { name = "ALPHA", transform = string.lower },
- charlie = { transform = function(x) return tonumber(x) * 10 end }}})
-
-test("../test-data/blank-line.csv", [[
-this,file,ends,with,a,blank,line!]])
-
-test("../test-data/BOM.csv", [[
-apple:one,charlie:30!
-apple:four,charlie:60!]],
-{ columns = {
- apple = { name = "ALPHA", transform = string.lower },
- charlie = { transform = function(x) return tonumber(x) * 10 end }}})
-
-test("../test-data/bars.txt", [[
-there's a comma in this field, but no newline,embedded
-newline,embedded
-newline!
-embedded
-newline,embedded
-newline,embedded
-newline!]])
-
-
-if errors == 0 then
- io.stdout:write("Passed\n")
-elseif errors == 1 then
- io.stdout:write("1 error\n")
-else
- io.stdout:write(("%d errors\n"):format(errors))
-end
-
-os.exit(errors)
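The `header = true` path that test.lua exercises against header.csv can be reduced to a few lines. Again a sketch, assuming the csv module is available; the inline string mirrors the test data:

```lua
local csv = require("csv")

local f = csv.openstring("alpha,bravo,charlie\nONE,two,3\n", { header = true })
for fields in f:lines() do
  -- fields are keyed by header name rather than by integer index
  print(fields.alpha, fields.bravo, fields.charlie)
end
f:close()
```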
diff --git a/Data/DefaultContent/Libraries/lua-csv/makefile b/Data/DefaultContent/Libraries/lua-csv/makefile
deleted file mode 100644
index dfa7596..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/makefile
+++ /dev/null
@@ -1,14 +0,0 @@
-LUA= $(shell echo `which lua`)
-LUA_BINDIR= $(shell echo `dirname $(LUA)`)
-LUA_PREFIX= $(shell echo `dirname $(LUA_BINDIR)`)
-LUA_VERSION = $(shell echo `lua -v 2>&1 | cut -d " " -f 2 | cut -b 1-3`)
-LUA_SHAREDIR=$(LUA_PREFIX)/share/lua/$(LUA_VERSION)
-
-default:
- @echo "Nothing to build. Try 'make install' or 'make test'."
-
-install:
- cp lua/csv.lua $(LUA_SHAREDIR)
-
-test:
- cd lua && $(LUA) test.lua
diff --git a/Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-1-1.rockspec b/Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-1-1.rockspec
deleted file mode 100644
index 6f280aa..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-1-1.rockspec
+++ /dev/null
@@ -1,24 +0,0 @@
-package = "csv"
-version = "1-1"
-source =
-{
- url = "git://github.com/geoffleyland/lua-csv.git",
- branch = "master",
- tag = "v1",
-}
-description =
-{
- summary = "CSV and other delimited file reading",
- homepage = "http://github.com/geoffleyland/lua-csv",
- license = "MIT/X11",
- maintainer = "Geoff Leyland <geoff.leyland@incremental.co.nz>"
-}
-dependencies = { "lua >= 5.1" }
-build =
-{
- type = "builtin",
- modules =
- {
- csv = "lua/csv.lua",
- },
-}
diff --git a/Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-scm-1.rockspec b/Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-scm-1.rockspec
deleted file mode 100644
index 29629da..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/rockspecs/csv-scm-1.rockspec
+++ /dev/null
@@ -1,23 +0,0 @@
-package = "csv"
-version = "scm-1"
-source =
-{
- url = "git://github.com/geoffleyland/lua-csv.git",
- branch = "master",
-}
-description =
-{
- summary = "CSV and other delimited file reading",
- homepage = "http://github.com/geoffleyland/lua-csv",
- license = "MIT/X11",
- maintainer = "Geoff Leyland <geoff.leyland@incremental.co.nz>"
-}
-dependencies = { "lua >= 5.1" }
-build =
-{
- type = "builtin",
- modules =
- {
- csv = "lua/csv.lua",
- },
-}
diff --git a/Data/DefaultContent/Libraries/lua-csv/test-data/BOM.csv b/Data/DefaultContent/Libraries/lua-csv/test-data/BOM.csv
deleted file mode 100644
index 9787c0d..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/test-data/BOM.csv
+++ /dev/null
@@ -1,3 +0,0 @@
-alpha,bravo,charlie
-ONE,two,3
-four,five,6
\ No newline at end of file
diff --git a/Data/DefaultContent/Libraries/lua-csv/test-data/bars.txt b/Data/DefaultContent/Libraries/lua-csv/test-data/bars.txt
deleted file mode 100644
index 9decabc..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/test-data/bars.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-there's a comma in this field, but no newline|"embedded
-newline"|"embedded
-newline"
-"embedded
-newline"|"embedded
-newline"|"embedded
-newline"
\ No newline at end of file
diff --git a/Data/DefaultContent/Libraries/lua-csv/test-data/blank-line.csv b/Data/DefaultContent/Libraries/lua-csv/test-data/blank-line.csv
deleted file mode 100644
index 63fc515..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/test-data/blank-line.csv
+++ /dev/null
@@ -1,2 +0,0 @@
-this,file,ends,with,a,blank,line
-
diff --git a/Data/DefaultContent/Libraries/lua-csv/test-data/embedded-newlines.csv b/Data/DefaultContent/Libraries/lua-csv/test-data/embedded-newlines.csv
deleted file mode 100644
index 67987d1..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/test-data/embedded-newlines.csv
+++ /dev/null
@@ -1,8 +0,0 @@
-"embedded
-newline","embedded
-newline","embedded
-newline"
-"embedded
-newline","embedded
-newline","embedded
-newline"
\ No newline at end of file
diff --git a/Data/DefaultContent/Libraries/lua-csv/test-data/embedded-quotes.csv b/Data/DefaultContent/Libraries/lua-csv/test-data/embedded-quotes.csv
deleted file mode 100644
index e0c5c73..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/test-data/embedded-quotes.csv
+++ /dev/null
@@ -1,2 +0,0 @@
-"embedded ""quotes""","embedded ""quotes""","embedded ""quotes"""
-"embedded ""quotes""","embedded ""quotes""","embedded ""quotes"""
\ No newline at end of file
diff --git a/Data/DefaultContent/Libraries/lua-csv/test-data/header.csv b/Data/DefaultContent/Libraries/lua-csv/test-data/header.csv
deleted file mode 100644
index 89f702e..0000000
--- a/Data/DefaultContent/Libraries/lua-csv/test-data/header.csv
+++ /dev/null
@@ -1,3 +0,0 @@
-alpha,bravo,charlie
-ONE,two,3
-four,five,6
\ No newline at end of file