From 946ffb8fc6e992278ed6938323340e8d4720d96c Mon Sep 17 00:00:00 2001 From: rbbowd Date: Fri, 22 Sep 2023 13:32:26 -0400 Subject: [PATCH 01/15] Add ci - check-spelling - secret-scanner - build --- .../secret-scanner/excluded_files.patterns | 4 + .../secret-scanner/excluded_lines.patterns | 5 + .../secret-scanner/excluded_secrets.patterns | 0 .github/actions/spelling/README.md | 17 + .github/actions/spelling/advice.md | 31 + .github/actions/spelling/allow.txt | 5 + .github/actions/spelling/candidate.patterns | 625 ++++++++++++++++++ .github/actions/spelling/excludes.txt | 83 +++ .github/actions/spelling/expect.txt | 0 .../actions/spelling/line_forbidden.patterns | 113 ++++ .github/actions/spelling/patterns.txt | 32 + .github/actions/spelling/reject.txt | 11 + .github/dependabot.yml | 13 + .github/workflows/ci.yml | 55 ++ .github/workflows/detect-new-secrets.yml | 12 + .github/workflows/no-ci.yml | 37 ++ .github/workflows/spelling.yml | 132 ++++ .gitignore | 35 +- 18 files changed, 1199 insertions(+), 11 deletions(-) create mode 100644 .github/actions/secret-scanner/excluded_files.patterns create mode 100644 .github/actions/secret-scanner/excluded_lines.patterns create mode 100644 .github/actions/secret-scanner/excluded_secrets.patterns create mode 100644 .github/actions/spelling/README.md create mode 100644 .github/actions/spelling/advice.md create mode 100644 .github/actions/spelling/allow.txt create mode 100644 .github/actions/spelling/candidate.patterns create mode 100644 .github/actions/spelling/excludes.txt create mode 100644 .github/actions/spelling/expect.txt create mode 100644 .github/actions/spelling/line_forbidden.patterns create mode 100644 .github/actions/spelling/patterns.txt create mode 100644 .github/actions/spelling/reject.txt create mode 100644 .github/dependabot.yml create mode 100644 .github/workflows/ci.yml create mode 100644 .github/workflows/detect-new-secrets.yml create mode 100644 .github/workflows/no-ci.yml create mode 100644 
.github/workflows/spelling.yml diff --git a/.github/actions/secret-scanner/excluded_files.patterns b/.github/actions/secret-scanner/excluded_files.patterns new file mode 100644 index 00000000..c5829150 --- /dev/null +++ b/.github/actions/secret-scanner/excluded_files.patterns @@ -0,0 +1,4 @@ +# Sealed secrets +.*-sealed\.json$ +.*-sealed\.yml$ +.*-sealed\.yaml$ diff --git a/.github/actions/secret-scanner/excluded_lines.patterns b/.github/actions/secret-scanner/excluded_lines.patterns new file mode 100644 index 00000000..daf2f874 --- /dev/null +++ b/.github/actions/secret-scanner/excluded_lines.patterns @@ -0,0 +1,5 @@ +# Image tags +^.*tag.*$ + +# Secrets we don't care about +[\"\']?googleMapsApiKey[\"\']?: [\"\']?\w+[\"\']? diff --git a/.github/actions/secret-scanner/excluded_secrets.patterns b/.github/actions/secret-scanner/excluded_secrets.patterns new file mode 100644 index 00000000..e69de29b diff --git a/.github/actions/spelling/README.md b/.github/actions/spelling/README.md new file mode 100644 index 00000000..1f699f3d --- /dev/null +++ b/.github/actions/spelling/README.md @@ -0,0 +1,17 @@ +# check-spelling/check-spelling configuration + +File | Purpose | Format | Info +-|-|-|- +[dictionary.txt](dictionary.txt) | Replacement dictionary (creating this file will override the default dictionary) | one word per line | [dictionary](https://github.com/check-spelling/check-spelling/wiki/Configuration#dictionary) +[allow.txt](allow.txt) | Add words to the dictionary | one word per line (only letters and `'`s allowed) | [allow](https://github.com/check-spelling/check-spelling/wiki/Configuration#allow) +[reject.txt](reject.txt) | Remove words from the dictionary (after allow) | grep pattern matching whole dictionary words | [reject](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-reject) +[excludes.txt](excludes.txt) | Files to ignore entirely | perl regular expression | 
[excludes](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-excludes) +[only.txt](only.txt) | Only check matching files (applied after excludes) | perl regular expression | [only](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-only) +[patterns.txt](patterns.txt) | Patterns to ignore from checked lines | perl regular expression (order matters, first match wins) | [patterns](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-patterns) +[candidate.patterns](candidate.patterns) | Patterns that might be worth adding to [patterns.txt](patterns.txt) | perl regular expression with optional comment block introductions (all matches will be suggested) | [candidates](https://github.com/check-spelling/check-spelling/wiki/Feature:-Suggest-patterns) +[line_forbidden.patterns](line_forbidden.patterns) | Patterns to flag in checked lines | perl regular expression (order matters, first match wins) | [patterns](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-patterns) +[expect.txt](expect.txt) | Expected words that aren't in the dictionary | one word per line (sorted, alphabetically) | [expect](https://github.com/check-spelling/check-spelling/wiki/Configuration#expect) +[advice.md](advice.md) | Supplement for GitHub comment when unrecognized words are found | GitHub Markdown | [advice](https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples%3A-advice) + +Note: you can replace any of these files with a directory by the same name (minus the suffix) +and then include multiple files inside that directory (with that suffix) to merge multiple files together. diff --git a/.github/actions/spelling/advice.md b/.github/actions/spelling/advice.md new file mode 100644 index 00000000..a32d1090 --- /dev/null +++ b/.github/actions/spelling/advice.md @@ -0,0 +1,31 @@ + +
If the flagged items are :exploding_head: false positives + +If items relate to a ... +* binary file (or some other file you wouldn't want to check at all). + + Please add a file path to the `excludes.txt` file matching the containing file. + + File paths are Perl 5 Regular Expressions - you can [test]( +https://www.regexplanet.com/advanced/perl/) yours before committing to verify it will match your files. + + `^` refers to the file's path from the root of the repository, so `^README\.md$` would exclude [README.md]( +../tree/HEAD/README.md) (on whichever branch you're using). + +* well-formed pattern. + + If you can write a [pattern]( +https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples:-patterns +) that would match it, + try adding it to the `patterns.txt` file. + + Patterns are Perl 5 Regular Expressions - you can [test]( +https://www.regexplanet.com/advanced/perl/) yours before committing to verify it will match your lines. + + Note that patterns can't match multiline strings. + +
+ + +:steam_locomotive: If you're seeing this message and your PR is from a branch that doesn't have check-spelling, +please merge to your PR's base branch to get the version configured for your repository. diff --git a/.github/actions/spelling/allow.txt b/.github/actions/spelling/allow.txt new file mode 100644 index 00000000..61567618 --- /dev/null +++ b/.github/actions/spelling/allow.txt @@ -0,0 +1,5 @@ +github +https +ssh +ubuntu +workarounds diff --git a/.github/actions/spelling/candidate.patterns b/.github/actions/spelling/candidate.patterns new file mode 100644 index 00000000..309b7b71 --- /dev/null +++ b/.github/actions/spelling/candidate.patterns @@ -0,0 +1,625 @@ +# marker to ignore all code on line +^.*/\* #no-spell-check-line \*/.*$ +# marker to ignore all code on line +^.*\bno-spell-check(?:-line|)(?:\s.*|)$ + +# https://cspell.org/configuration/document-settings/ +# cspell inline +^.*\b[Cc][Ss][Pp][Ee][Ll]{2}:\s*[Dd][Ii][Ss][Aa][Bb][Ll][Ee]-[Ll][Ii][Nn][Ee]\b + +# patch hunk comments +^\@\@ -\d+(?:,\d+|) \+\d+(?:,\d+|) \@\@ .* +# git index header +index (?:[0-9a-z]{7,40},|)[0-9a-z]{7,40}\.\.[0-9a-z]{7,40} + +# css url wrappings +\burl\([^)]*\) + +# cid urls +(['"])cid:.*?\g{-1} + +# data url in parens +\(data:(?:[^) ][^)]*?|)(?:[A-Z]{3,}|[A-Z][a-z]{2,}|[a-z]{3,})[^)]*\) +# data url in quotes +([`'"])data:(?:[^ `'"].*?|)(?:[A-Z]{3,}|[A-Z][a-z]{2,}|[a-z]{3,}).*\g{-1} +# data url +data:[-a-zA-Z=;:/0-9+]*,\S* + +# https/http/file urls +(?:\b(?:https?|ftp|file)://)[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|] + +# mailto urls +mailto:[-a-zA-Z=;:/?%&0-9+@.]{3,} + +# magnet urls +magnet:[?=:\w]+ + +# magnet urls +"magnet:[^"]+" + +# obs: +"obs:[^"]*" + +# The `\b` here means a break, it's the fancy way to handle urls, but it makes things harder to read +# In this examples content, I'm using a number of different ways to match things to show various approaches +# asciinema +\basciinema\.org/a/[0-9a-zA-Z]+ + +# asciinema v2 +^\[\d+\.\d+, "[io]", ".*"\]$ + 
+# apple +\bdeveloper\.apple\.com/[-\w?=/]+ +# Apple music +\bembed\.music\.apple\.com/fr/playlist/usr-share/[-\w.]+ + +# appveyor api +\bci\.appveyor\.com/api/projects/status/[0-9a-z]+ +# appveyor project +\bci\.appveyor\.com/project/(?:[^/\s"]*/){2}builds?/\d+/job/[0-9a-z]+ + +# Amazon + +# Amazon +\bamazon\.com/[-\w]+/(?:dp/[0-9A-Z]+|) +# AWS S3 +\b\w*\.s3[^.]*\.amazonaws\.com/[-\w/&#%_?:=]* +# AWS execute-api +\b[0-9a-z]{10}\.execute-api\.[-0-9a-z]+\.amazonaws\.com\b +# AWS ELB +\b\w+\.[-0-9a-z]+\.elb\.amazonaws\.com\b +# AWS SNS +\bsns\.[-0-9a-z]+.amazonaws\.com/[-\w/&#%_?:=]* +# AWS VPC +vpc-\w+ + +# While you could try to match `http://` and `https://` by using `s?` in `https?://`, sometimes there +# YouTube url +\b(?:(?:www\.|)youtube\.com|youtu.be)/(?:channel/|embed/|user/|playlist\?list=|watch\?v=|v/|)[-a-zA-Z0-9?&=_%]* +# YouTube music +\bmusic\.youtube\.com/youtubei/v1/browse(?:[?&]\w+=[-a-zA-Z0-9?&=_]*) +# YouTube tag +<\s*youtube\s+id=['"][-a-zA-Z0-9?_]*['"] +# YouTube image +\bimg\.youtube\.com/vi/[-a-zA-Z0-9?&=_]* +# Google Accounts +\baccounts.google.com/[-_/?=.:;+%&0-9a-zA-Z]* +# Google Analytics +\bgoogle-analytics\.com/collect.[-0-9a-zA-Z?%=&_.~]* +# Google APIs +\bgoogleapis\.(?:com|dev)/[a-z]+/(?:v\d+/|)[a-z]+/[-@:./?=\w+|&]+ +# Google Storage +\b[-a-zA-Z0-9.]*\bstorage\d*\.googleapis\.com(?:/\S*|) +# Google Calendar +\bcalendar\.google\.com/calendar(?:/u/\d+|)/embed\?src=[@./?=\w&%]+ +\w+\@group\.calendar\.google\.com\b +# Google DataStudio +\bdatastudio\.google\.com/(?:(?:c/|)u/\d+/|)(?:embed/|)(?:open|reporting|datasources|s)/[-0-9a-zA-Z]+(?:/page/[-0-9a-zA-Z]+|) +# The leading `/` here is as opposed to the `\b` above +# ... 
a short way to match `https://` or `http://` since most urls have one of those prefixes +# Google Docs +/docs\.google\.com/[a-z]+/(?:ccc\?key=\w+|(?:u/\d+|d/(?:e/|)[0-9a-zA-Z_-]+/)?(?:edit\?[-\w=#.]*|/\?[\w=&]*|)) +# Google Drive +\bdrive\.google\.com/(?:file/d/|open)[-0-9a-zA-Z_?=]* +# Google Groups +\bgroups\.google\.com(?:/[a-z]+/(?:#!|)[^/\s"]+)* +# Google Maps +\bmaps\.google\.com/maps\?[\w&;=]* +# Google themes +themes\.googleusercontent\.com/static/fonts/[^/\s"]+/v\d+/[^.]+. +# Google CDN +\bclients2\.google(?:usercontent|)\.com[-0-9a-zA-Z/.]* +# Goo.gl +/goo\.gl/[a-zA-Z0-9]+ +# Google Chrome Store +\bchrome\.google\.com/webstore/detail/[-\w]*(?:/\w*|) +# Google Books +\bgoogle\.(?:\w{2,4})/books(?:/\w+)*\?[-\w\d=&#.]* +# Google Fonts +\bfonts\.(?:googleapis|gstatic)\.com/[-/?=:;+&0-9a-zA-Z]* +# Google Forms +\bforms\.gle/\w+ +# Google Scholar +\bscholar\.google\.com/citations\?user=[A-Za-z0-9_]+ +# Google Colab Research Drive +\bcolab\.research\.google\.com/drive/[-0-9a-zA-Z_?=]* + +# GitHub SHAs (api) +\bapi.github\.com/repos(?:/[^/\s"]+){3}/[0-9a-f]+\b +# GitHub SHAs (markdown) +(?:\[`?[0-9a-f]+`?\]\(https:/|)/(?:www\.|)github\.com(?:/[^/\s"]+){2,}(?:/[^/\s")]+)(?:[0-9a-f]+(?:[-0-9a-zA-Z/#.]*|)\b|) +# GitHub SHAs +\bgithub\.com(?:/[^/\s"]+){2}[@#][0-9a-f]+\b +# GitHub wiki +\bgithub\.com/(?:[^/]+/){2}wiki/(?:(?:[^/]+/|)_history|[^/]+(?:/_compare|)/[0-9a-f.]{40,})\b +# githubusercontent +/[-a-z0-9]+\.githubusercontent\.com/[-a-zA-Z0-9?&=_\/.]* +# githubassets +\bgithubassets.com/[0-9a-f]+(?:[-/\w.]+) +# gist github +\bgist\.github\.com/[^/\s"]+/[0-9a-f]+ +# git.io +\bgit\.io/[0-9a-zA-Z]+ +# GitHub JSON +"node_id": "[-a-zA-Z=;:/0-9+_]*" +# Contributor +\[[^\]]+\]\(https://github\.com/[^/\s"]+/?\) +# GHSA +GHSA(?:-[0-9a-z]{4}){3} + +# GitLab commit +\bgitlab\.[^/\s"]*/\S+/\S+/commit/[0-9a-f]{7,16}#[0-9a-f]{40}\b +# GitLab merge requests +\bgitlab\.[^/\s"]*/\S+/\S+/-/merge_requests/\d+/diffs#[0-9a-f]{40}\b +# GitLab uploads 
+\bgitlab\.[^/\s"]*/uploads/[-a-zA-Z=;:/0-9+]* +# GitLab commits +\bgitlab\.[^/\s"]*/(?:[^/\s"]+/){2}commits?/[0-9a-f]+\b + +# binance +accounts\.binance\.com/[a-z/]*oauth/authorize\?[-0-9a-zA-Z&%]* + +# bitbucket diff +\bapi\.bitbucket\.org/\d+\.\d+/repositories/(?:[^/\s"]+/){2}diff(?:stat|)(?:/[^/\s"]+){2}:[0-9a-f]+ +# bitbucket repositories commits +\bapi\.bitbucket\.org/\d+\.\d+/repositories/(?:[^/\s"]+/){2}commits?/[0-9a-f]+ +# bitbucket commits +\bbitbucket\.org/(?:[^/\s"]+/){2}commits?/[0-9a-f]+ + +# bit.ly +\bbit\.ly/\w+ + +# bitrise +\bapp\.bitrise\.io/app/[0-9a-f]*/[\w.?=&]* + +# bootstrapcdn.com +\bbootstrapcdn\.com/[-./\w]+ + +# cdn.cloudflare.com +\bcdnjs\.cloudflare\.com/[./\w]+ + +# circleci +\bcircleci\.com/gh(?:/[^/\s"]+){1,5}.[a-z]+\?[-0-9a-zA-Z=&]+ + +# gitter +\bgitter\.im(?:/[^/\s"]+){2}\?at=[0-9a-f]+ + +# gravatar +\bgravatar\.com/avatar/[0-9a-f]+ + +# ibm +[a-z.]*ibm\.com/[-_#=:%!?~.\\/\d\w]* + +# imgur +\bimgur\.com/[^.]+ + +# Internet Archive +\barchive\.org/web/\d+/(?:[-\w.?,'/\\+&%$#_:]*) + +# discord +/discord(?:app\.com|\.gg)/(?:invite/)?[a-zA-Z0-9]{7,} + +# Disqus +\bdisqus\.com/[-\w/%.()!?&=_]* + +# medium link +\blink\.medium\.com/[a-zA-Z0-9]+ +# medium +\bmedium\.com/\@?[^/\s"]+/[-\w]+ + +# microsoft +\b(?:https?://|)(?:(?:download\.visualstudio|docs|msdn2?|research)\.microsoft|blogs\.msdn)\.com/[-_a-zA-Z0-9()=./%]* +# powerbi +\bapp\.powerbi\.com/reportEmbed/[^"' ]* +# vs devops +\bvisualstudio.com(?::443|)/[-\w/?=%&.]* +# microsoft store +\bmicrosoft\.com/store/apps/\w+ + +# mvnrepository.com +\bmvnrepository\.com/[-0-9a-z./]+ + +# now.sh +/[0-9a-z-.]+\.now\.sh\b + +# oracle +\bdocs\.oracle\.com/[-0-9a-zA-Z./_?#&=]* + +# chromatic.com +/\S+.chromatic.com\S*[")] + +# codacy +\bapi\.codacy\.com/project/badge/Grade/[0-9a-f]+ + +# compai +\bcompai\.pub/v1/png/[0-9a-f]+ + +# mailgun api +\.api\.mailgun\.net/v3/domains/[0-9a-z]+\.mailgun.org/messages/[0-9a-zA-Z=@]* +# mailgun +\b[0-9a-z]+.mailgun.org + +# /message-id/ 
+/message-id/[-\w@./%]+ + +# Reddit +\breddit\.com/r/[/\w_]* + +# requestb.in +\brequestb\.in/[0-9a-z]+ + +# sched +\b[a-z0-9]+\.sched\.com\b + +# Slack url +slack://[a-zA-Z0-9?&=]+ +# Slack +\bslack\.com/[-0-9a-zA-Z/_~?&=.]* +# Slack edge +\bslack-edge\.com/[-a-zA-Z0-9?&=%./]+ +# Slack images +\bslack-imgs\.com/[-a-zA-Z0-9?&=%.]+ + +# shields.io +\bshields\.io/[-\w/%?=&.:+;,]* + +# stackexchange -- https://stackexchange.com/feeds/sites +\b(?:askubuntu|serverfault|stack(?:exchange|overflow)|superuser).com/(?:questions/\w+/[-\w]+|a/) + +# Sentry +[0-9a-f]{32}\@o\d+\.ingest\.sentry\.io\b + +# Twitter markdown +\[\@[^[/\]:]*?\]\(https://twitter.com/[^/\s"')]*(?:/status/\d+(?:\?[-_0-9a-zA-Z&=]*|)|)\) +# Twitter hashtag +\btwitter\.com/hashtag/[\w?_=&]* +# Twitter status +\btwitter\.com/[^/\s"')]*(?:/status/\d+(?:\?[-_0-9a-zA-Z&=]*|)|) +# Twitter profile images +\btwimg\.com/profile_images/[_\w./]* +# Twitter media +\btwimg\.com/media/[-_\w./?=]* +# Twitter link shortened +\bt\.co/\w+ + +# facebook +\bfburl\.com/[0-9a-z_]+ +# facebook CDN +\bfbcdn\.net/[\w/.,]* +# facebook watch +\bfb\.watch/[0-9A-Za-z]+ + +# dropbox +\bdropbox\.com/sh?/[^/\s"]+/[-0-9A-Za-z_.%?=&;]+ + +# ipfs protocol +ipfs://[0-9a-zA-Z]{3,} +# ipfs url +/ipfs/[0-9a-zA-Z]{3,} + +# w3 +\bw3\.org/[-0-9a-zA-Z/#.]+ + +# loom +\bloom\.com/embed/[0-9a-f]+ + +# regex101 +\bregex101\.com/r/[^/\s"]+/\d+ + +# figma +\bfigma\.com/file(?:/[0-9a-zA-Z]+/)+ + +# freecodecamp.org +\bfreecodecamp\.org/[-\w/.]+ + +# image.tmdb.org +\bimage\.tmdb\.org/[/\w.]+ + +# mermaid +\bmermaid\.ink/img/[-\w]+|\bmermaid-js\.github\.io/mermaid-live-editor/#/edit/[-\w]+ + +# Wikipedia +\ben\.wikipedia\.org/wiki/[-\w%.#]+ + +# gitweb +[^"\s]+/gitweb/\S+;h=[0-9a-f]+ + +# HyperKitty lists +/archives/list/[^@/]+\@[^/\s"]*/message/[^/\s"]*/ + +# lists +/thread\.html/[^"\s]+ + +# list-management +\blist-manage\.com/subscribe(?:[?&](?:u|id)=[0-9a-f]+)+ + +# kubectl.kubernetes.io/last-applied-configuration 
+"kubectl.kubernetes.io/last-applied-configuration": ".*" + +# pgp +\bgnupg\.net/pks/lookup[?&=0-9a-zA-Z]* + +# Spotify +\bopen\.spotify\.com/embed/playlist/\w+ + +# Mastodon +\bmastodon\.[-a-z.]*/(?:media/|\@)[?&=0-9a-zA-Z_]* + +# scastie +\bscastie\.scala-lang\.org/[^/]+/\w+ + +# images.unsplash.com +\bimages\.unsplash\.com/(?:(?:flagged|reserve)/|)[-\w./%?=%&.;]+ + +# pastebin +\bpastebin\.com/[\w/]+ + +# heroku +\b\w+\.heroku\.com/source/archive/\w+ + +# quip +\b\w+\.quip\.com/\w+(?:(?:#|/issues/)\w+)? + +# badgen.net +\bbadgen\.net/badge/[^")\]'\s]+ + +# statuspage.io +\w+\.statuspage\.io\b + +# media.giphy.com +\bmedia\.giphy\.com/media/[^/]+/[\w.?&=]+ + +# tinyurl +\btinyurl\.com/\w+ + +# codepen +\bcodepen\.io/[\w/]+ + +# registry.npmjs.org +\bregistry\.npmjs\.org/(?:@[^/"']+/|)[^/"']+/-/[-\w@.]+ + +# getopts +\bgetopts\s+(?:"[^"]+"|'[^']+') + +# ANSI color codes +(?:\\(?:u00|x)1[Bb]|\x1b|\\u\{1[Bb]\})\[\d+(?:;\d+|)m + +# URL escaped characters +\%[0-9A-F][A-F] +# lower URL escaped characters +\%[0-9a-f][a-f](?=[a-z]{2,}) +# IPv6 +\b(?:[0-9a-fA-F]{0,4}:){3,7}[0-9a-fA-F]{0,4}\b +# c99 hex digits (not the full format, just one I've seen) +0x[0-9a-fA-F](?:\.[0-9a-fA-F]*|)[pP] +# Punycode +\bxn--[-0-9a-z]+ +# sha +sha\d+:[0-9]*[a-f]{3,}[0-9a-f]* +# sha-... 
-- uses a fancy capture +(\\?['"]|")[0-9a-f]{40,}\g{-1} +# hex runs +\b[0-9a-fA-F]{16,}\b +# hex in url queries +=[0-9a-fA-F]*?(?:[A-F]{3,}|[a-f]{3,})[0-9a-fA-F]*?& +# ssh +(?:ssh-\S+|-nistp256) [-a-zA-Z=;:/0-9+]{12,} + +# PGP +\b(?:[0-9A-F]{4} ){9}[0-9A-F]{4}\b +# GPG keys +\b(?:[0-9A-F]{4} ){5}(?: [0-9A-F]{4}){5}\b +# Well known gpg keys +.well-known/openpgpkey/[\w./]+ + +# pki +-----BEGIN.*-----END + +# uuid: +\b[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}\b +# hex digits including css/html color classes: +(?:[\\0][xX]|\\u|[uU]\+|#x?|\%23)[0-9_a-fA-FgGrR]*?[a-fA-FgGrR]{2,}[0-9_a-fA-FgGrR]*(?:[uUlL]{0,3}|[iu]\d+)\b +# integrity +integrity=(['"])(?:\s*sha\d+-[-a-zA-Z=;:/0-9+]{40,})+\g{-1} + +# https://www.gnu.org/software/groff/manual/groff.html +# man troff content +\\f[BCIPR] +# '/" +\\\([ad]q + +# .desktop mime types +^MimeTypes?=.*$ +# .desktop localized entries +^[A-Z][a-z]+\[[a-z]+\]=.*$ +# Localized .desktop content +Name\[[^\]]+\]=.* + +# IServiceProvider / isAThing +\b(?:I|isA)(?=(?:[A-Z][a-z]{2,})+\b) + +# crypt +(['"])\$2[ayb]\$.{56}\g{-1} + +# scrypt / argon +\$(?:scrypt|argon\d+[di]*)\$\S+ + +# go.sum +\bh1:\S+ + +# scala modules +("[^"]+"\s*%%?\s*){2,3}"[^"]+" + +# Input to GitHub JSON +content: (['"])[-a-zA-Z=;:/0-9+]*=\g{-1} + +# This does not cover multiline strings, if your repository has them, +# you'll want to remove the `(?=.*?")` suffix. 
+# The `(?=.*?")` suffix should limit the false positives rate +# printf +%(?:(?:(?:hh?|ll?|[jzt])?[diuoxn]|l?[cs]|L?[fega]|p)(?=[a-z]{2,})|(?:X|L?[FEGA]|p)(?=[a-zA-Z]{2,}))(?=[_a-zA-Z]+\b)(?!%)(?=.*?['"]) + +# Python string prefix / binary prefix +# Note that there's a high false positive rate, remove the `?=` and search for the regex to see if the matches seem like reasonable strings +(?|m([|!/@#,;']).*?\g{-1}) + +# perl qr regex +(?|\(.*?\)|([|!/@#,;']).*?\g{-1}) + +# Go regular expressions +regexp?\.MustCompile\(`[^`]*`\) + +# regex choice +\(\?:[^)]+\|[^)]+\) + +# proto +^\s*(\w+)\s\g{-1} = + +# sed regular expressions +sed 's/(?:[^/]*?[a-zA-Z]{3,}[^/]*?/){2} + +# node packages +(["'])\@[^/'" ]+/[^/'" ]+\g{-1} + +# go install +go install(?:\s+[a-z]+\.[-@\w/.]+)+ + +# kubernetes pod status lists +# https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase +\w+(?:-\w+)+\s+\d+/\d+\s+(?:Running|Pending|Succeeded|Failed|Unknown)\s+ + +# kubectl - pods in CrashLoopBackOff +\w+-[0-9a-f]+-\w+\s+\d+/\d+\s+CrashLoopBackOff\s+ + +# kubernetes object suffix +-[0-9a-f]{10}-\w{5}\s + +# posthog secrets +posthog\.init\((['"])phc_[^"',]+\g{-1}, + +# xcode + +# xcodeproject scenes +(?:Controller|destination|ID|id)="\w{3}-\w{2}-\w{3}" + +# xcode api botches +customObjectInstantitationMethod + +# configure flags +.* \| --\w{2,}.*?(?=\w+\s\w+) + +# font awesome classes +\.fa-[-a-z0-9]+ + +# bearer auth +(['"])Bear[e][r] .*?\g{-1} + +# basic auth +(['"])Basic [-a-zA-Z=;:/0-9+]{3,}\g{-1} + +# base64 encoded content +([`'"])[-a-zA-Z=;:/0-9+]+=\g{-1} +# base64 encoded content in xml/sgml +>[-a-zA-Z=;:/0-9+]+== 0.0.22) +\\\w{2,}\{ + +# eslint +"varsIgnorePattern": ".+" + +# Windows short paths +[/\\][^/\\]{5,6}~\d{1,2}[/\\] + +# in a version of check-spelling after @0.0.21 printf markers won't be automatically consumed +# printf markers +(?v# +(?:(?<=[A-Z]{2})V|(?<=[a-z]{2}|[A-Z]{2})v)\d+(?:\b|(?=[a-zA-Z_])) + +# Compiler flags (Unix, Java/Scala) +# Use if you have 
things like `-Pdocker` and want to treat them as `docker` +(?:^|[\t ,>"'`=(])-(?:(?:J-|)[DPWXY]|[Llf])(?=[A-Z]{2,}|[A-Z][a-z]|[a-z]{2,}) + +# Compiler flags (Windows / PowerShell) +# This is a subset of the more general compiler flags pattern. +# It avoids matching `-Path` to prevent it from being treated as `ath` +(?:^|[\t ,"'`=(])-(?:[DPL](?=[A-Z]{2,})|[WXYlf](?=[A-Z]{2,}|[A-Z][a-z]|[a-z]{2,})) + +# Compiler flags (linker) +,-B + +# curl arguments +\b(?:\\n|)curl(?:\s+-[a-zA-Z]{1,2}\b)*(?:\s+-[a-zA-Z]{3,})(?:\s+-[a-zA-Z]+)* +# set arguments +\bset(?:\s+-[abefimouxE]{1,2})*\s+-[abefimouxE]{3,}(?:\s+-[abefimouxE]+)* +# tar arguments +\b(?:\\n|)g?tar(?:\.exe|)(?:(?:\s+--[-a-zA-Z]+|\s+-[a-zA-Z]+|\s[ABGJMOPRSUWZacdfh-pr-xz]+\b)(?:=[^ ]*|))+ +# tput arguments -- https://man7.org/linux/man-pages/man5/terminfo.5.html -- technically they can be more than 5 chars long... +\btput\s+(?:(?:-[SV]|-T\s*\w+)\s+)*\w{3,5}\b +# macOS temp folders +/var/folders/\w\w/[+\w]+/(?:T|-Caches-)/ diff --git a/.github/actions/spelling/excludes.txt b/.github/actions/spelling/excludes.txt new file mode 100644 index 00000000..5977f03f --- /dev/null +++ b/.github/actions/spelling/excludes.txt @@ -0,0 +1,83 @@ +# See https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples:-excludes +(?:^|/)(?i)COPYRIGHT +(?:^|/)(?i)LICEN[CS]E +(?:^|/)3rdparty/ +(?:^|/)go\.sum$ +(?:^|/)package(?:-lock|)\.json$ +(?:^|/)Pipfile$ +(?:^|/)pyproject.toml +(?:^|/)requirements(?:-dev|-doc|-test|)\.txt$ +(?:^|/)vendor/ +ignore$ +\.a$ +\.ai$ +\.all-contributorsrc$ +\.avi$ +\.bmp$ +\.bz2$ +\.cer$ +\.class$ +\.coveragerc$ +\.crl$ +\.crt$ +\.csr$ +\.dll$ +\.docx?$ +\.drawio$ +\.DS_Store$ +\.eot$ +\.eps$ +\.exe$ +\.gif$ +\.git-blame-ignore-revs$ +\.gitattributes$ +\.gitkeep$ +\.graffle$ +\.gz$ +\.icns$ +\.ico$ +\.ipynb$ +\.jar$ +\.jks$ +\.jpe?g$ +\.key$ +\.lib$ +\.lock$ +\.map$ +\.min\.. 
+\.mo$ +\.mod$ +\.mp[34]$ +\.o$ +\.ocf$ +\.otf$ +\.p12$ +\.parquet$ +\.pdf$ +\.pem$ +\.pfx$ +\.png$ +\.psd$ +\.pyc$ +\.pylintrc$ +\.qm$ +\.s$ +\.sig$ +\.so$ +\.svgz?$ +\.sys$ +\.tar$ +\.tgz$ +\.tiff?$ +\.ttf$ +\.wav$ +\.webm$ +\.webp$ +\.woff2?$ +\.xcf$ +\.xlsx?$ +\.xpm$ +\.xz$ +\.zip$ +^\.github/actions/spelling/ +^\Q.github/workflows/spelling.yml\E$ diff --git a/.github/actions/spelling/expect.txt b/.github/actions/spelling/expect.txt new file mode 100644 index 00000000..e69de29b diff --git a/.github/actions/spelling/line_forbidden.patterns b/.github/actions/spelling/line_forbidden.patterns new file mode 100644 index 00000000..4c4a6777 --- /dev/null +++ b/.github/actions/spelling/line_forbidden.patterns @@ -0,0 +1,113 @@ +# reject `m_data` as VxWorks defined it and that breaks things if it's used elsewhere +# see [fprime](https://github.com/nasa/fprime/commit/d589f0a25c59ea9a800d851ea84c2f5df02fb529) +# and [Qt](https://github.com/qtproject/qt-solutions/blame/fb7bc42bfcc578ff3fa3b9ca21a41e96eb37c1c7/qtscriptclassic/src/qscriptbuffer_p.h#L46) +# \bm_data\b + +# If you have a framework that uses `it()` for testing and `fit()` for debugging a specific test, +# you might not want to check in code where you were debugging w/ `fit()`, in which case, you might want +# to use this: +#\bfit\( + +# s.b. anymore +\bany more[,.] + +# s.b. GitHub +(?]*>|[^<]*)\s*$ + +# Autogenerated revert commit message +^This reverts commit [0-9a-f]{40}\.$ + +# ignore long runs of a single character: +\b([A-Za-z])\g{-1}{3,}\b diff --git a/.github/actions/spelling/reject.txt b/.github/actions/spelling/reject.txt new file mode 100644 index 00000000..e5e4c3ee --- /dev/null +++ b/.github/actions/spelling/reject.txt @@ -0,0 +1,11 @@ +^attache$ +^bellow$ +benefitting +occurences? 
+^dependan.* +^oer$ +Sorce +^[Ss]pae.* +^untill$ +^untilling$ +^wether.* diff --git a/.github/dependabot.yml b/.github/dependabot.yml new file mode 100644 index 00000000..4996c270 --- /dev/null +++ b/.github/dependabot.yml @@ -0,0 +1,13 @@ +# To get started with Dependabot version updates, you'll need to specify which +# package ecosystems to update and where the package manifests are located. +# Please see the documentation for all configuration options: +# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates + +version: 2 +updates: +- package-ecosystem: github-actions + directory: "/" + schedule: + interval: weekly + timezone: America/Toronto + open-pull-requests-limit: 10 diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 00000000..e4c439e4 --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,55 @@ +name: CI +# If you update paths, make sure to update them in no-ci.yml as well +on: + push: + branches: + - master + paths-ignore: + - .github/actions + - "README.md" + pull_request: + paths-ignore: + - .github/actions + - "README.md" + +permissions: + contents: read + +env: + REPOSITORY: "gcr.io/helical-crowbar-220917" + PROJECT_ID: "helical-crowbar-220917" + PLATFORMS: "linux/amd64" + prometheus_java_agent_version: 0.12.0 + PR_NUMBER: ${{ github.event.pull_request.number || github.ref_name }} + NEXUS_USER: "jenkins" + NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" + +concurrency: + group: ${{ github.workflow_ref }} + cancel-in-progress: true + +jobs: + build-and-publish: + name: Build libraries and image + runs-on: ubuntu-latest + steps: + - name: Build w/ sbt + uses: garnercorp/build-actions/scala@main + with: + maven-username: ${{ env.NEXUS_USER }} + maven-password: ${{ env.NEXUS_PASSWORD }} + copy-prometheus-to: lighthouse-jobs + jars-to-simplify: lighthouse-jobs + prometheus-url: https://nexus.garnercorp.com/repository/raw/jmx_prometheus_javaagent-${{ 
env.prometheus_java_agent_version }}.jar + build-script: scripts/build-scala-ci.sh + build-script-args: ${{ env.PR_NUMBER }} + container-project: ${{ env.PROJECT_ID }} + google-credentials-json: ${{ secrets.GCR_JSON_KEY }} + google-cloud-sdk-version: ${{ vars.GCLOUD_SDK_VERSION }} + + - name: Build and Publish Jobs image + uses: garnercorp/build-actions/image@main + with: + container-project: ${{ env.PROJECT_ID }} + platforms: ${{ env.PLATFORMS }} + working-directory: "lighthouse-jobs" diff --git a/.github/workflows/detect-new-secrets.yml b/.github/workflows/detect-new-secrets.yml new file mode 100644 index 00000000..b6ea524e --- /dev/null +++ b/.github/workflows/detect-new-secrets.yml @@ -0,0 +1,12 @@ +name: Checking for Secrets +on: [push] + +jobs: + check-secrets: + name: Checking for Secrets + runs-on: ubuntu-latest + steps: + - name: Checkout Configuration + uses: actions/checkout@v4 + - name: Secret Scanner + uses: secret-scanner/action@bf855b904a8bca17a334986797650dacec7ed529 diff --git a/.github/workflows/no-ci.yml b/.github/workflows/no-ci.yml new file mode 100644 index 00000000..4fddec6f --- /dev/null +++ b/.github/workflows/no-ci.yml @@ -0,0 +1,37 @@ +name: Lighthouse-Jobs CI +# If you update paths, make sure to update them in ci.yml as well +on: + push: + branches: + - master + paths: + - .github/actions + - "README.md" + pull_request: + paths: + - .github/actions + - "README.md" + +permissions: + contents: read + +env: + REPOSITORY: "gcr.io/helical-crowbar-220917" + PROJECT_ID: "helical-crowbar-220917" + PLATFORMS: "linux/amd64" + prometheus_java_agent_version: 0.12.0 + NEXUS_USER: "jenkins" + NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" + +jobs: + e2e: + name: Trigger E2E Test and Deploy + runs-on: ubuntu-latest + permissions: + contents: none + steps: + - name: Stub Lighthouse-Jobs Build + run: | + echo "Files outside of the Lighthouse-Jobs CI workflow have changed. 
+ This workflow has an equivalently named required check for those files, + so this one exists to pass that check in the case that none of those files were changed." diff --git a/.github/workflows/spelling.yml b/.github/workflows/spelling.yml new file mode 100644 index 00000000..8d54aad9 --- /dev/null +++ b/.github/workflows/spelling.yml @@ -0,0 +1,132 @@ +name: Check Spelling + +# Comment management is handled through a secondary job, for details see: +# https://github.com/check-spelling/check-spelling/wiki/Feature%3A-Restricted-Permissions +# +# `jobs.comment-push` runs when a push is made to a repository and the `jobs.spelling` job needs to make a comment +# (in odd cases, it might actually run just to collapse a comment, but that's fairly rare) +# it needs `contents: write` in order to add a comment. +# +# `jobs.comment-pr` runs when a pull_request is made to a repository and the `jobs.spelling` job needs to make a comment +# or collapse a comment (in the case where it had previously made a comment and now no longer needs to show a comment) +# it needs `pull-requests: write` in order to manipulate those comments. + +# Updating pull request branches is managed via comment handling. +# For details, see: https://github.com/check-spelling/check-spelling/wiki/Feature:-Update-expect-list +# +# These elements work together to make it happen: +# +# `on.issue_comment` +# This event listens to comments by users asking to update the metadata. +# +# `jobs.update` +# This job runs in response to an issue_comment and will push a new commit +# to update the spelling metadata. +# +# `with.experimental_apply_changes_via_bot` +# Tells the action to support and generate messages that enable it +# to make a commit to update the spelling metadata. +# +# `with.ssh_key` +# In order to trigger workflows when the commit is made, you can provide a +# secret (typically, a write-enabled github deploy key). 
+# +# For background, see: https://github.com/check-spelling/check-spelling/wiki/Feature:-Update-with-deploy-key + +# Sarif reporting +# +# Access to Sarif reports is generally restricted (by GitHub) to members of the repository. +# +# Requires enabling `security-events: write` +# and configuring the action with `use_sarif: 1` +# +# For information on the feature, see: https://github.com/check-spelling/check-spelling/wiki/Feature:-Sarif-output + +# Minimal workflow structure: +# +# on: +# push: +# ... +# pull_request_target: +# ... +# jobs: +# # you only want the spelling job, all others should be omitted +# spelling: +# # remove `security-events: write` and `use_sarif: 1` +# # remove `experimental_apply_changes_via_bot: 1` +# ... otherwise adjust the `with:` as you wish + +on: + push: + branches: + - "**" + tags-ignore: + - "**" + pull_request_target: + branches: + - "**" + types: + - 'opened' + - 'reopened' + - 'synchronize' + issue_comment: + types: + - 'created' + +jobs: + spelling: + name: Check Spelling + permissions: + contents: read + pull-requests: read + actions: read + security-events: write + outputs: + followup: ${{ steps.spelling.outputs.followup }} + runs-on: ubuntu-latest + if: ${{ contains(github.event_name, 'pull_request') || github.event_name == 'push' }} + concurrency: + group: spelling-${{ github.event.pull_request.number || github.ref }} + # note: If you use only_check_changed_files, you do not want cancel-in-progress + cancel-in-progress: true + steps: + - name: check-spelling + id: spelling + uses: check-spelling/check-spelling@prerelease + with: + suppress_push_for_open_pull_request: 1 + checkout: true + check_file_names: 1 + spell_check_this: check-spelling/spell-check-this@prerelease + post_comment: 0 + use_magic_file: 1 + report-timing: 1 + warnings: 
bad-regex,binary-file,deprecated-feature,large-file,limited-references,no-newline-at-eof,noisy-file,non-alpha-in-dictionary,token-is-substring,unexpected-line-ending,whitespace-in-dictionary,minified-file,unsupported-configuration,no-files-to-check + experimental_apply_changes_via_bot: 1 + use_sarif: ${{ (!github.event.pull_request || (github.event.pull_request.head.repo.full_name == github.repository)) && 1 }} + extra_dictionary_limit: 20 + extra_dictionaries: + cspell:software-terms/dict/softwareTerms.txt + + update: + name: Update PR + permissions: + contents: write + pull-requests: write + actions: read + runs-on: ubuntu-latest + if: ${{ + github.event_name == 'issue_comment' && + github.event.issue.pull_request && + contains(github.event.comment.body, '@check-spelling-bot apply') + }} + concurrency: + group: spelling-update-${{ github.event.issue.number }} + cancel-in-progress: false + steps: + - name: apply spelling updates + uses: check-spelling/check-spelling@prerelease + with: + experimental_apply_changes_via_bot: 1 + checkout: true + ssh_key: "${{ secrets.CHECK_SPELLING }}" diff --git a/.gitignore b/.gitignore index ddccbf4e..81495bfb 100644 --- a/.gitignore +++ b/.gitignore @@ -1,15 +1,28 @@ -target/ -.idea/ -# vim -*.sw? 
+.idea -.vscode/ +target + +logs .DS_Store -**/.DS_Store -# Ignore [ce]tags files -tags -.metals -.bloop -metals.sbt +.env + +*.sc + +*.jar + +**/venv/ + +demo.conf + +jmx_prometheus_javaagent* + +### Files generated by the VSCode editor and extensions ### +.bloop/ +.metals/ +.vscode/ +project/.bloop/ +project/metals.sbt +project/project/ +/.bsp/sbt.json From 66fe055a75e14861f0c107d6953b2b72d582b947 Mon Sep 17 00:00:00 2001 From: rbbowd Date: Fri, 22 Sep 2023 13:37:15 -0400 Subject: [PATCH 02/15] Updating baseline file --- .secrets.baseline | 1 + 1 file changed, 1 insertion(+) create mode 100644 .secrets.baseline diff --git a/.secrets.baseline b/.secrets.baseline new file mode 100644 index 00000000..8b137891 --- /dev/null +++ b/.secrets.baseline @@ -0,0 +1 @@ + From a867fe59cf79ce4ce3f91c295586fc15e6d7d9f7 Mon Sep 17 00:00:00 2001 From: rbbowd Date: Fri, 22 Sep 2023 14:22:36 -0400 Subject: [PATCH 03/15] tweak ci --- .github/actions/spelling/advice.md | 4 - .github/actions/spelling/excludes.txt | 4 +- .github/actions/spelling/expect.txt | 51 +++++++++++ .github/actions/spelling/patterns.txt | 26 ++++++ .github/workflows/spelling.yml | 3 + .secrets.baseline | 126 +++++++++++++++++++++++++- CHANGELOG.md | 35 ------- CODE_OF_CONDUCT.md | 2 +- 8 files changed, 209 insertions(+), 42 deletions(-) delete mode 100644 CHANGELOG.md diff --git a/.github/actions/spelling/advice.md b/.github/actions/spelling/advice.md index a32d1090..84eb9218 100644 --- a/.github/actions/spelling/advice.md +++ b/.github/actions/spelling/advice.md @@ -25,7 +25,3 @@ https://www.regexplanet.com/advanced/perl/) yours before committing to verify it Note that patterns can't match multiline strings. - - -:steam_locomotive: If you're seeing this message and your PR is from a branch that doesn't have check-spelling, -please merge to your PR's base branch to get the version configured for your repository. 
diff --git a/.github/actions/spelling/excludes.txt b/.github/actions/spelling/excludes.txt index 5977f03f..6241a102 100644 --- a/.github/actions/spelling/excludes.txt +++ b/.github/actions/spelling/excludes.txt @@ -8,7 +8,6 @@ (?:^|/)pyproject.toml (?:^|/)requirements(?:-dev|-doc|-test|)\.txt$ (?:^|/)vendor/ -ignore$ \.a$ \.ai$ \.all-contributorsrc$ @@ -80,4 +79,7 @@ ignore$ \.xz$ \.zip$ ^\.github/actions/spelling/ +^\Q.github/actions/secret-scanner/excluded_secrets.patterns\E$ ^\Q.github/workflows/spelling.yml\E$ +^\Q.secrets.baseline\E$ +ignore$ diff --git a/.github/actions/spelling/expect.txt b/.github/actions/spelling/expect.txt index e69de29b..46311f7e 100644 --- a/.github/actions/spelling/expect.txt +++ b/.github/actions/spelling/expect.txt @@ -0,0 +1,51 @@ +amd +atto +bbb +Bordowitz +ccc +chris +chrisdavenport +christopherdavenport +chuusai +comple +contramap +davenverse +Defn +Delims +dquote +Folat +garnercorp +GCLOUD +gcr +google +gormorant +hlist +hlw +hnil +jenkins +labelledread +labelledwrite +mergify +microsite +munit +nel +readlabelled +sbt +scalacheck +scalafmt +scalameta +semiauto +Seperator +som +Sonatype +TEXTDATA +timepit +tpolecat +tsv +Typeclass +typelevel +uncons +workflows +writelabelled +yyy +zzz diff --git a/.github/actions/spelling/patterns.txt b/.github/actions/spelling/patterns.txt index b9106f45..62d765b8 100644 --- a/.github/actions/spelling/patterns.txt +++ b/.github/actions/spelling/patterns.txt @@ -1,5 +1,31 @@ # See https://github.com/check-spelling/check-spelling/wiki/Configuration-Examples:-patterns +# Automatically suggested patterns +# hit-count: 16 file-count: 3 +# scala modules +("[^"]+"\s*%%?\s*){2,3}"[^"]+" + +# hit-count: 13 file-count: 6 +# https/http/file urls +(?:\b(?:https?|ftp|file)://)[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|] + +# hit-count: 2 file-count: 1 +# in a version of check-spelling after @0.0.21 printf markers won't be automatically consumed +# printf markers +(?Unreleased Changes - -## New and 
Noteworthy for Version 0.2.0-M3 - -- http4s 0.20.0-M5 -- specs2 4.3.6 -- Ruby Version fix for CI -- github4s -- sbt-release - -## New and Noteworthy for Version 0.2.0-M2 - -- TSV Parser -- cats-core 1.5.0 -- cats-effect 1.1.0 -- fs2 1.0.2 -- refined 0.9.3 -- Http4s 0.20.0-M4 - -## New and Noteworthy for Version 0.2.0-M1 - -- cats-effect 1.0 -- fs2 1.0 -- http4s 0.20.0-M1 - -## New and Noteworthy for Version 0.1.0-M1 - -- Switched Model to use NonEmptyList as fully empty Headers/Rows are not allowed -- Fixed a bug in parsing CSV's with trailing newlines due to ambiguous specification diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index b59ed9cb..7baf20f3 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -8,6 +8,6 @@ Everyone is expected to follow the [Scala Code of Conduct] when discussing the p Any questions, concerns, or moderation requests please contact a member of the project. -- [Christopher Davenport](mailto:chris@christopherdavenport.tech) +- [Garner](mailto:gormorant@opensource.garnercorp.com) [Scala Code of Conduct]: https://www.scala-lang.org/conduct/ From 193fae33b7abef71913debd0c6d10cf77165475d Mon Sep 17 00:00:00 2001 From: rbbowd Date: Thu, 18 Apr 2024 08:50:53 -0400 Subject: [PATCH 04/15] updating ci --- .github/workflows/ci.yml | 8 ++-- .github/workflows/no-ci.yml | 37 +++++++++--------- .scalafmt.conf | 30 ++------------- .../io/chrisdavenport/cormorant/Printer.scala | 38 ++++++++++++------- 4 files changed, 50 insertions(+), 63 deletions(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index e4c439e4..3d85c06a 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -17,10 +17,12 @@ permissions: env: REPOSITORY: "gcr.io/helical-crowbar-220917" + IMAGE_NAME: "gormorant" PROJECT_ID: "helical-crowbar-220917" PLATFORMS: "linux/amd64" prometheus_java_agent_version: 0.12.0 PR_NUMBER: ${{ github.event.pull_request.number || github.ref_name }} + COMMIT_SHA: "${{ github.event.pull_request.head.sha || 
github.sha }}" NEXUS_USER: "jenkins" NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" @@ -38,8 +40,8 @@ jobs: with: maven-username: ${{ env.NEXUS_USER }} maven-password: ${{ env.NEXUS_PASSWORD }} - copy-prometheus-to: lighthouse-jobs - jars-to-simplify: lighthouse-jobs + copy-prometheus-to: gormorant + jars-to-simplify: gormorant prometheus-url: https://nexus.garnercorp.com/repository/raw/jmx_prometheus_javaagent-${{ env.prometheus_java_agent_version }}.jar build-script: scripts/build-scala-ci.sh build-script-args: ${{ env.PR_NUMBER }} @@ -52,4 +54,4 @@ jobs: with: container-project: ${{ env.PROJECT_ID }} platforms: ${{ env.PLATFORMS }} - working-directory: "lighthouse-jobs" + working-directory: "gormorant" diff --git a/.github/workflows/no-ci.yml b/.github/workflows/no-ci.yml index 4fddec6f..a624036c 100644 --- a/.github/workflows/no-ci.yml +++ b/.github/workflows/no-ci.yml @@ -1,4 +1,4 @@ -name: Lighthouse-Jobs CI +name: CI # If you update paths, make sure to update them in ci.yml as well on: push: @@ -13,25 +13,24 @@ on: - "README.md" permissions: - contents: read + contents: read env: - REPOSITORY: "gcr.io/helical-crowbar-220917" - PROJECT_ID: "helical-crowbar-220917" - PLATFORMS: "linux/amd64" - prometheus_java_agent_version: 0.12.0 - NEXUS_USER: "jenkins" - NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" + REPOSITORY: "gcr.io/helical-crowbar-220917" + PROJECT_ID: "helical-crowbar-220917" + PLATFORMS: "linux/amd64" + prometheus_java_agent_version: 0.12.0 + NEXUS_USER: "jenkins" + NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" jobs: - e2e: - name: Trigger E2E Test and Deploy - runs-on: ubuntu-latest - permissions: - contents: none - steps: - - name: Stub Lighthouse-Jobs Build - run: | - echo "Files outside of the Lighthouse-Jobs CI workflow have changed. - This workflow has an equivalently named required check for those files, - so this one exists to pass that check in the case that none of those files were changed." 
+ ready-to-merge: + name: Ready to merge + runs-on: ubuntu-latest + permissions: + contents: none + steps: + - name: Stub gormorant Build + run: | + echo "Files outside of the lighthouse-queues workflow have changed. This workflow has an equivalently named required check for those files, + so this one exists to pass that check in the case that none of those files were changed." diff --git a/.scalafmt.conf b/.scalafmt.conf index 47de4be0..cbc6f471 100644 --- a/.scalafmt.conf +++ b/.scalafmt.conf @@ -1,27 +1,3 @@ -# tune this file as appropriate to your style! see: https://olafurpg.github.io/scalafmt/#Configuration - -version=2.6.4 - -maxColumn = 100 - -continuationIndent.callSite = 2 - -newlines { - sometimesBeforeColonInMethodReturnType = false -} - -align { - arrowEnumeratorGenerator = false - ifWhileOpenParen = false - openParenCallSite = false - openParenDefnSite = false - - tokens = ["%", "%%"] -} - -docstrings = JavaDoc - -rewrite { - rules = [SortImports, RedundantBraces] - redundantBraces.maxLines = 1 -} +version=3.8.1 +runner.dialect = "scala213" +trailingCommas = preserve diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala index 3fdc85a3..c38b4811 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala @@ -19,30 +19,40 @@ object Printer { if (stringsToEscape.exists(string.contains(_))) { val escapedString = string.replace(surround, escape + surround) surround + escapedString + surround - } else { + } else string - } } - def generic( columnSeperator: String, rowSeperator: String, escape: String, surround: String, - additionalEscapes: Set[String] = Set.empty[String]): Printer = + additionalEscapes: Set[String] = Set.empty[String] + ): Printer = new Printer { - override def print(csv: CSV): String = csv match { - case CSV.Field(text) => - 
escapedAsNecessary(text, Set(columnSeperator, rowSeperator, escape, surround) ++ additionalEscapes, escape, surround) - case CSV.Header(text) => - escapedAsNecessary(text, Set(columnSeperator, rowSeperator, escape, surround) ++ additionalEscapes, escape, surround) - case CSV.Row(xs) => xs.map(print).intercalate(columnSeperator) - case CSV.Headers(xs) => xs.map(print).intercalate(columnSeperator) - case CSV.Rows(xs) => xs.map(print).intercalate(rowSeperator) - case CSV.Complete(headers, body) => print(headers) + rowSeperator + print(body) - } + override def print(csv: CSV): String = + csv match { + case CSV.Field(text) => + escapedAsNecessary( + text, + Set(columnSeperator, rowSeperator, escape, surround) ++ additionalEscapes, + escape, + surround + ) + case CSV.Header(text) => + escapedAsNecessary( + text, + Set(columnSeperator, rowSeperator, escape, surround) ++ additionalEscapes, + escape, + surround + ) + case CSV.Row(xs) => xs.map(print).intercalate(columnSeperator) + case CSV.Headers(xs) => xs.map(print).intercalate(columnSeperator) + case CSV.Rows(xs) => xs.map(print).intercalate(rowSeperator) + case CSV.Complete(headers, body) => print(headers) + rowSeperator + print(body) + } override val rowSeparator: String = rowSeperator From c22e836894279dd7d3c754cec4d6f18fb94dbde6 Mon Sep 17 00:00:00 2001 From: rbbowd Date: Thu, 18 Apr 2024 09:12:14 -0400 Subject: [PATCH 05/15] update ci - add scalafmt.yml - remove mergify.yml - update build.props version --- .github/workflows/ci.yml | 86 ++++++++++++++++------------------ .github/workflows/scalafmt.yml | 18 +++++++ .mergify.yml | 9 ---- project/build.properties | 2 +- 4 files changed, 59 insertions(+), 56 deletions(-) create mode 100644 .github/workflows/scalafmt.yml delete mode 100644 .mergify.yml diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 3d85c06a..28e705f0 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -1,57 +1,51 @@ -name: CI +name: gormorant CI # If you update 
paths, make sure to update them in no-ci.yml as well on: - push: - branches: - - master - paths-ignore: - - .github/actions - - "README.md" - pull_request: - paths-ignore: - - .github/actions - - "README.md" + push: + paths-ignore: + - .github/actions + - "README.md" + pull_request: + paths-ignore: + - .github/actions + - "README.md" permissions: - contents: read + contents: read env: - REPOSITORY: "gcr.io/helical-crowbar-220917" - IMAGE_NAME: "gormorant" - PROJECT_ID: "helical-crowbar-220917" - PLATFORMS: "linux/amd64" - prometheus_java_agent_version: 0.12.0 - PR_NUMBER: ${{ github.event.pull_request.number || github.ref_name }} - COMMIT_SHA: "${{ github.event.pull_request.head.sha || github.sha }}" - NEXUS_USER: "jenkins" - NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" + REPOSITORY: "gcr.io/helical-crowbar-220917" + PROJECT_ID: "helical-crowbar-220917" + PLATFORMS: "linux/amd64" + PR_NUMBER: ${{ github.event.pull_request.number || github.ref_name }} + NEXUS_USER: "jenkins" + NEXUS_PASSWORD: "${{ secrets.NEXUS_PASSWORD }}" concurrency: - group: ${{ github.workflow_ref }} - cancel-in-progress: true + group: ${{ github.workflow_ref }} + cancel-in-progress: true jobs: - build-and-publish: - name: Build libraries and image - runs-on: ubuntu-latest - steps: - - name: Build w/ sbt - uses: garnercorp/build-actions/scala@main - with: - maven-username: ${{ env.NEXUS_USER }} - maven-password: ${{ env.NEXUS_PASSWORD }} - copy-prometheus-to: gormorant - jars-to-simplify: gormorant - prometheus-url: https://nexus.garnercorp.com/repository/raw/jmx_prometheus_javaagent-${{ env.prometheus_java_agent_version }}.jar - build-script: scripts/build-scala-ci.sh - build-script-args: ${{ env.PR_NUMBER }} - container-project: ${{ env.PROJECT_ID }} - google-credentials-json: ${{ secrets.GCR_JSON_KEY }} - google-cloud-sdk-version: ${{ vars.GCLOUD_SDK_VERSION }} + build: + name: Build Project + runs-on: ubuntu-latest-8cpu + steps: + - name: Build w/ sbt + if: github.ref_name != 'master' + 
uses: garnercorp/build-actions/scala@main + with: + maven-username: ${{ env.NEXUS_USER }} + maven-password: ${{ env.NEXUS_PASSWORD }} + build-script: scripts/build-scala-ci.sh + build-script-args: ${{ env.PR_NUMBER }} + extra-sbt-args: test - - name: Build and Publish Jobs image - uses: garnercorp/build-actions/image@main - with: - container-project: ${{ env.PROJECT_ID }} - platforms: ${{ env.PLATFORMS }} - working-directory: "gormorant" + - name: Publish + uses: garnercorp/build-actions/scala@main + if: github.ref_name == 'master' + with: + maven-username: ${{ env.NEXUS_USER }} + maven-password: ${{ env.NEXUS_PASSWORD }} + build-script: scripts/build-scala-ci.sh + build-script-args: ${{ env.PR_NUMBER }} + extra-sbt-args: publish diff --git a/.github/workflows/scalafmt.yml b/.github/workflows/scalafmt.yml new file mode 100644 index 00000000..6f8acc67 --- /dev/null +++ b/.github/workflows/scalafmt.yml @@ -0,0 +1,18 @@ +name: Check scalafmt on push +on: + push: + branches: + - "**" + tags-ignore: + - "**" +jobs: + scalafmt-lint: + name: Scalafmt + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Checking your scala code formatting + uses: GarnerCorp/scalafmt-ci@main + with: + args: "--test" + version: 3.8.1 diff --git a/.mergify.yml b/.mergify.yml deleted file mode 100644 index b0f5013f..00000000 --- a/.mergify.yml +++ /dev/null @@ -1,9 +0,0 @@ -pull_request_rules: - - name: automatically merge scala-steward's PRs - conditions: - - author=scala-steward - - status-success=Travis CI - Pull Request - - body~=labels:.*semver-patch - actions: - merge: - method: merge diff --git a/project/build.properties b/project/build.properties index 27430827..04267b14 100644 --- a/project/build.properties +++ b/project/build.properties @@ -1 +1 @@ -sbt.version=1.9.6 +sbt.version=1.9.9 From 5694b859e64ac7a8fd16db80cabb1461b48cb5f8 Mon Sep 17 00:00:00 2001 From: rbbowd Date: Thu, 18 Apr 2024 09:19:07 -0400 Subject: [PATCH 06/15] runs on latest without 8cpu --- 
.github/workflows/ci.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 28e705f0..3d746a89 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -28,7 +28,7 @@ concurrency: jobs: build: name: Build Project - runs-on: ubuntu-latest-8cpu + runs-on: ubuntu-latest steps: - name: Build w/ sbt if: github.ref_name != 'master' From 65e6b35c2125252e09f6420d9741e2eca7766d74 Mon Sep 17 00:00:00 2001 From: rbbowd Date: Thu, 18 Apr 2024 09:26:05 -0400 Subject: [PATCH 07/15] run scalafmt --- .../io/chrisdavenport/cormorant/CSV.scala | 4 +- .../io/chrisdavenport/cormorant/Cursor.scala | 29 ++- .../chrisdavenport/cormorant/Decoding.scala | 11 +- .../chrisdavenport/cormorant/Encoding.scala | 13 +- .../io/chrisdavenport/cormorant/Error.scala | 8 +- .../io/chrisdavenport/cormorant/Get.scala | 8 +- .../cormorant/LabelledRead.scala | 5 +- .../io/chrisdavenport/cormorant/Printer.scala | 21 +- .../io/chrisdavenport/cormorant/Read.scala | 19 +- .../cormorant/instances/base.scala | 80 +++++--- .../cormorant/instances/time.scala | 88 ++++++--- .../chrisdavenport/cormorant/syntax/put.scala | 14 +- .../cormorant/syntax/read.scala | 3 +- .../cormorant/fs2/package.scala | 180 ++++++++++-------- .../cormorant/generic/auto.scala | 40 ++-- .../cormorant/generic/internal/read.scala | 104 +++++----- .../generic/internal/readlabelled.scala | 83 ++++---- .../cormorant/generic/internal/write.scala | 26 +-- .../generic/internal/writelabelled.scala | 73 ++++--- .../cormorant/generic/semiauto.scala | 45 +++-- .../cormorant/parser/CSVLikeParser.scala | 124 ++++++------ .../cormorant/parser/package.scala | 58 ++++-- .../cormorant/refined/package.scala | 14 +- 23 files changed, 599 insertions(+), 451 deletions(-) diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/CSV.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/CSV.scala index d035308c..dca81dc5 100644 --- 
a/modules/core/src/main/scala/io/chrisdavenport/cormorant/CSV.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/CSV.scala @@ -5,13 +5,13 @@ import cats.data._ sealed trait CSV object CSV { final case class Complete(headers: Headers, rows: Rows) extends CSV { - def stripTrailingRow: Complete = + def stripTrailingRow: Complete = this.copy(rows = rows.stripTrailingRow) } final case class Rows(rows: List[Row]) extends CSV { def stripTrailingRow: Rows = { val initial: List[Row] = rows match { - case Nil => Nil + case Nil => Nil case other => other.init } Rows(initial) diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Cursor.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Cursor.scala index 31b050c2..52292ba2 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Cursor.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Cursor.scala @@ -11,32 +11,43 @@ object Cursor { def atHeader(header: CSV.Header)( headers: CSV.Headers, - row: CSV.Row): Either[Error.DecodeFailure, CSV.Field] = { + row: CSV.Row + ): Either[Error.DecodeFailure, CSV.Field] = { optionIndexOf(headers.l.toList)(header) .fold[Either[Error.DecodeFailure, Int]]( - Either.left(Error.DecodeFailure.single( - s"Header $header not present in header: $headers for row: $row")) + Either.left( + Error.DecodeFailure.single( + s"Header $header not present in header: $headers for row: $row" + ) + ) )(Either.right) .flatMap(i => atIndex(row, i)) } - def atIndex(row: CSV.Row, index: Int): Either[Error.DecodeFailure, CSV.Field] = { - row.l - .toList + def atIndex( + row: CSV.Row, + index: Int + ): Either[Error.DecodeFailure, CSV.Field] = { + row.l.toList .drop(index) .headOption .fold( Either.left[Error.DecodeFailure, CSV.Field]( - Error.DecodeFailure.single(s"Index $index not present in row: $row ")) + Error.DecodeFailure.single(s"Index $index not present in row: $row ") + ) )(Either.right[Error.DecodeFailure, CSV.Field]) } def 
decodeAtHeader[A: Get]( - header: CSV.Header)(headers: CSV.Headers, row: CSV.Row): Either[Error.DecodeFailure, A] = + header: CSV.Header + )(headers: CSV.Headers, row: CSV.Row): Either[Error.DecodeFailure, A] = atHeader(header)(headers, row) .flatMap(Get[A].get(_)) - def decodeAtIndex[A: Get](row: CSV.Row, index: Int): Either[Error.DecodeFailure, A] = + def decodeAtIndex[A: Get]( + row: CSV.Row, + index: Int + ): Either[Error.DecodeFailure, A] = atIndex(row, index) .flatMap(Get[A].get(_)) } diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Decoding.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Decoding.scala index b6b20aa1..65a4250c 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Decoding.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Decoding.scala @@ -2,13 +2,18 @@ package io.chrisdavenport.cormorant object Decoding { - def readRow[A: Read](row: CSV.Row): Either[Error.DecodeFailure, A] = Read[A].read(row) + def readRow[A: Read](row: CSV.Row): Either[Error.DecodeFailure, A] = + Read[A].read(row) def readRows[A: Read](rows: CSV.Rows): List[Either[Error.DecodeFailure, A]] = rows.rows.map(Read[A].read) - def readComplete[A: Read](complete: CSV.Complete): List[Either[Error.DecodeFailure, A]] = + def readComplete[A: Read]( + complete: CSV.Complete + ): List[Either[Error.DecodeFailure, A]] = readRows(complete.rows) - def readLabelled[A: LabelledRead](complete: CSV.Complete): List[Either[Error.DecodeFailure, A]] = + def readLabelled[A: LabelledRead]( + complete: CSV.Complete + ): List[Either[Error.DecodeFailure, A]] = complete.rows.rows.map(LabelledRead[A].read(_, complete.headers)) } diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Encoding.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Encoding.scala index 6f89fe1d..63bfabe6 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Encoding.scala +++ 
b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Encoding.scala @@ -1,12 +1,19 @@ package io.chrisdavenport.cormorant object Encoding { - def writeWithHeaders[A: Write](xs: List[A], headers: CSV.Headers): CSV.Complete = + def writeWithHeaders[A: Write]( + xs: List[A], + headers: CSV.Headers + ): CSV.Complete = CSV.Complete(headers, writeRows(xs)) def writeRow[A: Write](a: A): CSV.Row = Write[A].write(a) - def writeRows[A: Write](xs: List[A]): CSV.Rows = CSV.Rows(xs.map(Write[A].write)) + def writeRows[A: Write](xs: List[A]): CSV.Rows = + CSV.Rows(xs.map(Write[A].write)) def writeComplete[A: LabelledWrite](xs: List[A]): CSV.Complete = - CSV.Complete(LabelledWrite[A].headers, CSV.Rows(xs.map(LabelledWrite[A].write))) + CSV.Complete( + LabelledWrite[A].headers, + CSV.Rows(xs.map(LabelledWrite[A].write)) + ) } diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Error.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Error.scala index a5ea2b97..34070292 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Error.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Error.scala @@ -10,8 +10,8 @@ sealed trait Error extends Exception { final override def getMessage: String = toString override def toString: String = this match { case Error.DecodeFailure(failure) => s"DecodeFailure($failure)" - case Error.ParseFailure(reason) => s"ParseFailure($reason)" - case Error.PrintFailure(reason) => s"PrintFailure($reason)" + case Error.ParseFailure(reason) => s"ParseFailure($reason)" + case Error.PrintFailure(reason) => s"PrintFailure($reason)" } } object Error { @@ -23,7 +23,9 @@ object Error { final case class DecodeFailure(failure: NonEmptyList[String]) extends Error object DecodeFailure { - def single(reason: String): DecodeFailure = DecodeFailure(NonEmptyList.of(reason)) + def single(reason: String): DecodeFailure = DecodeFailure( + NonEmptyList.of(reason) + ) implicit val decodeFailureSemigroup: 
Semigroup[DecodeFailure] = { new Semigroup[DecodeFailure] { def combine(x: DecodeFailure, y: DecodeFailure): DecodeFailure = diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Get.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Get.scala index 90c11af4..3ac90b9d 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Get.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Get.scala @@ -17,12 +17,16 @@ object Get { Either.right(f(field)) } - def tryOrMessage[A](f: CSV.Field => Try[A], failedMessage: CSV.Field => String): Get[A] = + def tryOrMessage[A]( + f: CSV.Field => Try[A], + failedMessage: CSV.Field => String + ): Get[A] = new Get[A] { def get(field: CSV.Field): Either[Error.DecodeFailure, A] = f(field).toOption .fold[Either[Error.DecodeFailure, A]]( - Either.left(Error.DecodeFailure.single(failedMessage(field))))(x => Either.right(x)) + Either.left(Error.DecodeFailure.single(failedMessage(field))) + )(x => Either.right(x)) } implicit val getFunctor: Functor[Get] = new Functor[Get] { diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/LabelledRead.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/LabelledRead.scala index 826edc04..57c2f39b 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/LabelledRead.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/LabelledRead.scala @@ -6,9 +6,8 @@ trait LabelledRead[A] { object LabelledRead { def apply[A](implicit ev: LabelledRead[A]): LabelledRead[A] = ev - /** - * Labelled Read Which Ignores Headers and Reads Based on the Supplied Read - **/ + /** Labelled Read Which Ignores Headers and Reads Based on the Supplied Read + */ def fromRead[A: Read]: LabelledRead[A] = new LabelledRead[A] { def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure, A] = Read[A].read(a) diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala 
b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala index c38b4811..7ea45025 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Printer.scala @@ -37,21 +37,32 @@ object Printer { case CSV.Field(text) => escapedAsNecessary( text, - Set(columnSeperator, rowSeperator, escape, surround) ++ additionalEscapes, + Set( + columnSeperator, + rowSeperator, + escape, + surround + ) ++ additionalEscapes, escape, surround ) case CSV.Header(text) => escapedAsNecessary( text, - Set(columnSeperator, rowSeperator, escape, surround) ++ additionalEscapes, + Set( + columnSeperator, + rowSeperator, + escape, + surround + ) ++ additionalEscapes, escape, surround ) - case CSV.Row(xs) => xs.map(print).intercalate(columnSeperator) + case CSV.Row(xs) => xs.map(print).intercalate(columnSeperator) case CSV.Headers(xs) => xs.map(print).intercalate(columnSeperator) - case CSV.Rows(xs) => xs.map(print).intercalate(rowSeperator) - case CSV.Complete(headers, body) => print(headers) + rowSeperator + print(body) + case CSV.Rows(xs) => xs.map(print).intercalate(rowSeperator) + case CSV.Complete(headers, body) => + print(headers) + rowSeperator + print(body) } override val rowSeparator: String = rowSeperator diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Read.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Read.scala index f2e2fdc2..32ebb6f1 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/Read.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/Read.scala @@ -2,20 +2,25 @@ package io.chrisdavenport.cormorant import cats.syntax.all._ trait Read[A] { - def read(a: CSV.Row): Either[Error.DecodeFailure, A] = + def read(a: CSV.Row): Either[Error.DecodeFailure, A] = readPartial(a).map(_.fold(_._2, identity)) // It either fails, returns a partial row that is left and an outcome, // or the final outcome if it consumed all 
input of the row. - def readPartial(a: CSV.Row): Either[Error.DecodeFailure, Either[(CSV.Row, A), A]] + def readPartial( + a: CSV.Row + ): Either[Error.DecodeFailure, Either[(CSV.Row, A), A]] } object Read { def apply[A](implicit ev: Read[A]): Read[A] = ev - def fromHeaders[A](f: (CSV.Headers, CSV.Row) => Either[Error.DecodeFailure, A])( - headers: CSV.Headers): Read[A] = new Read[A] { - def readPartial(a: CSV.Row): Either[Error.DecodeFailure,Either[(CSV.Row, A), A]] = - f(headers,a).map(Either.right) - } + def fromHeaders[A]( + f: (CSV.Headers, CSV.Row) => Either[Error.DecodeFailure, A] + )(headers: CSV.Headers): Read[A] = new Read[A] { + def readPartial( + a: CSV.Row + ): Either[Error.DecodeFailure, Either[(CSV.Row, A), A]] = + f(headers, a).map(Either.right) + } } diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/base.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/base.scala index 57a21b2e..dc380e29 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/base.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/base.scala @@ -14,7 +14,12 @@ trait base { implicit val unitGet: Get[Unit] = new Get[Unit] { def get(csv: CSV.Field): Either[Error.DecodeFailure, Unit] = if (csv.x == "") Right(()) - else Left(Error.DecodeFailure.single("Failed to decode Unit: Received Field $field")) + else + Left( + Error.DecodeFailure.single( + "Failed to decode Unit: Received Field $field" + ) + ) } implicit val unitPut: Put[Unit] = stringPut.contramap(_ => "") @@ -24,18 +29,27 @@ trait base { ) implicit val boolPut: Put[Boolean] = stringPut.contramap(_.toString) - implicit val javaBoolGet: Get[lang.Boolean] = boolGet.map(java.lang.Boolean.valueOf) - implicit val javaBoolPut: Put[java.lang.Boolean] = boolPut.contramap(_.booleanValue()) + implicit val javaBoolGet: Get[lang.Boolean] = + boolGet.map(java.lang.Boolean.valueOf) + implicit val javaBoolPut: Put[java.lang.Boolean] = + 
boolPut.contramap(_.booleanValue()) implicit val charGet: Get[Char] = new Get[Char] { def get(csv: CSV.Field): Either[Error.DecodeFailure, Char] = if (csv.x.size == 1) Right(csv.x.charAt(0)) - else Left(Error.DecodeFailure.single("Failed to decode Char: Received Field $field")) + else + Left( + Error.DecodeFailure.single( + "Failed to decode Char: Received Field $field" + ) + ) } implicit val charPut: Put[Char] = stringPut.contramap(_.toString) - implicit val javaCharGet: Get[java.lang.Character] = charGet.map(java.lang.Character.valueOf) - implicit val javaCharPut: Put[java.lang.Character] = charPut.contramap(_.charValue()) + implicit val javaCharGet: Get[java.lang.Character] = + charGet.map(java.lang.Character.valueOf) + implicit val javaCharPut: Put[java.lang.Character] = + charPut.contramap(_.charValue()) implicit val floatGet: Get[Float] = Get.tryOrMessage( field => Try(field.x.toDouble.toFloat), @@ -43,8 +57,10 @@ trait base { ) implicit val floatPut: Put[Float] = stringPut.contramap(_.toString) - implicit val javaFloatGet: Get[java.lang.Float] = floatGet.map(java.lang.Float.valueOf) - implicit val javaFolatPut: Put[java.lang.Float] = floatPut.contramap(_.floatValue()) + implicit val javaFloatGet: Get[java.lang.Float] = + floatGet.map(java.lang.Float.valueOf) + implicit val javaFolatPut: Put[java.lang.Float] = + floatPut.contramap(_.floatValue()) implicit val doubleGet: Get[Double] = Get.tryOrMessage[Double]( field => Try(field.x.toDouble), @@ -52,8 +68,10 @@ trait base { ) implicit val doublePut: Put[Double] = stringPut.contramap(_.toString) - implicit val javaDoubleGet: Get[java.lang.Double] = doubleGet.map(java.lang.Double.valueOf) - implicit val javaDoublePut: Put[java.lang.Double] = doublePut.contramap(_.doubleValue()) + implicit val javaDoubleGet: Get[java.lang.Double] = + doubleGet.map(java.lang.Double.valueOf) + implicit val javaDoublePut: Put[java.lang.Double] = + doublePut.contramap(_.doubleValue()) implicit val intGet: Get[Int] = 
Get.tryOrMessage[Int]( field => Try(field.x.toInt), @@ -67,8 +85,10 @@ trait base { ) implicit val bytePut: Put[Byte] = intPut.contramap(_.toInt) - implicit val javaByteGet: Get[java.lang.Byte] = byteGet.map(java.lang.Byte.valueOf) - implicit val javaBytePut: Put[java.lang.Byte] = bytePut.contramap(_.byteValue()) + implicit val javaByteGet: Get[java.lang.Byte] = + byteGet.map(java.lang.Byte.valueOf) + implicit val javaBytePut: Put[java.lang.Byte] = + bytePut.contramap(_.byteValue()) implicit val shortGet: Get[Short] = Get.tryOrMessage[Short]( field => Try(field.x.toShort), @@ -76,11 +96,15 @@ trait base { ) implicit val shortPut: Put[Short] = stringPut.contramap(_.toString) - implicit val javaShortGet: Get[java.lang.Short] = shortGet.map(java.lang.Short.valueOf) - implicit val javaShortPut: Put[java.lang.Short] = shortPut.contramap(_.shortValue()) + implicit val javaShortGet: Get[java.lang.Short] = + shortGet.map(java.lang.Short.valueOf) + implicit val javaShortPut: Put[java.lang.Short] = + shortPut.contramap(_.shortValue()) - implicit val javaIntegerGet: Get[java.lang.Integer] = intGet.map(java.lang.Integer.valueOf) - implicit val javaIntegerPut: Put[java.lang.Integer] = intPut.contramap(_.intValue()) + implicit val javaIntegerGet: Get[java.lang.Integer] = + intGet.map(java.lang.Integer.valueOf) + implicit val javaIntegerPut: Put[java.lang.Integer] = + intPut.contramap(_.intValue()) implicit val longGet: Get[Long] = Get.tryOrMessage( field => Try(field.x.toLong), @@ -88,8 +112,10 @@ trait base { ) implicit val longPut: Put[Long] = stringPut.contramap(_.toString) - implicit val javaLongGet: Get[java.lang.Long] = longGet.map(java.lang.Long.valueOf) - implicit val javaLongPut: Put[java.lang.Long] = longPut.contramap(_.longValue()) + implicit val javaLongGet: Get[java.lang.Long] = + longGet.map(java.lang.Long.valueOf) + implicit val javaLongPut: Put[java.lang.Long] = + longPut.contramap(_.longValue()) implicit val bigIntGet: Get[BigInt] = Get.tryOrMessage( field => 
Try(BigInt(field.x)), @@ -99,7 +125,8 @@ trait base { implicit val javaBigIntegerGet: Get[java.math.BigInteger] = bigIntGet.map(_.bigInteger) - implicit val javaBigIntegerPut: Put[java.math.BigInteger] = bigIntPut.contramap(BigInt.apply) + implicit val javaBigIntegerPut: Put[java.math.BigInteger] = + bigIntPut.contramap(BigInt.apply) implicit val bigDecimalGet: Get[BigDecimal] = Get.tryOrMessage[BigDecimal]( field => Try(BigDecimal(field.x)), @@ -107,7 +134,8 @@ trait base { ) implicit val bigDecimalPut: Put[BigDecimal] = stringPut.contramap(_.toString) - implicit val javaBigDecimalGet: Get[java.math.BigDecimal] = bigDecimalGet.map(_.bigDecimal) + implicit val javaBigDecimalGet: Get[java.math.BigDecimal] = + bigDecimalGet.map(_.bigDecimal) implicit val javaBigDecimalPut: Put[java.math.BigDecimal] = bigDecimalPut.contramap(BigDecimal.apply) @@ -128,15 +156,14 @@ trait base { def put(a: Option[A]): CSV.Field = a.fold(CSV.Field(""))(a => P.put(a)) } - /** - * Get for Either, favors the Right get if successful - */ + /** Get for Either, favors the Right get if successful + */ implicit def eitherGet[A: Get, B: Get]: Get[Either[A, B]] = new Get[Either[A, B]] { def get(field: CSV.Field): Either[Error.DecodeFailure, Either[A, B]] = (Get[A].get(field), Get[B].get(field)) match { - case (_, Right(b)) => Either.right(Either.right(b)) - case (Right(a), _) => Either.right(Either.left(a)) + case (_, Right(b)) => Either.right(Either.right(b)) + case (Right(a), _) => Either.right(Either.left(a)) case (Left(e1), Left(e2)) => Either.left(e1 |+| e2) } } @@ -150,7 +177,8 @@ trait base { field => Try(e.withName(field.x)), field => s"Failed to decode Enumeration $e: Received Field $field" ) - final def enumerationPut[E <: Enumeration]: Put[E#Value] = stringPut.contramap(_.toString) + final def enumerationPut[E <: Enumeration]: Put[E#Value] = + stringPut.contramap(_.toString) } diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/time.scala 
b/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/time.scala index eda2fb56..5c7d3563 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/time.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/instances/time.scala @@ -31,7 +31,8 @@ trait time { field => Try(Instant.parse(field.x)), field => s"Failed to decode Instant: Received Field $field" ) - implicit final val instantPut: Put[Instant] = base.stringPut.contramap(_.toString) + implicit final val instantPut: Put[Instant] = + base.stringPut.contramap(_.toString) implicit final val zoneIdGet: Get[ZoneId] = Get.tryOrMessage( field => Try(ZoneId.of(field.x)), @@ -48,12 +49,14 @@ trait time { final def putLocalDateTime(formatter: DateTimeFormatter): Put[LocalDateTime] = base.stringPut.contramap(_.format(formatter)) - implicit final val localDateTimeGetDefault: Get[LocalDateTime] = getLocalDateTime( - ISO_LOCAL_DATE_TIME - ) - implicit final val localDateTimePutDefault: Put[LocalDateTime] = putLocalDateTime( - ISO_LOCAL_DATE_TIME - ) + implicit final val localDateTimeGetDefault: Get[LocalDateTime] = + getLocalDateTime( + ISO_LOCAL_DATE_TIME + ) + implicit final val localDateTimePutDefault: Put[LocalDateTime] = + putLocalDateTime( + ISO_LOCAL_DATE_TIME + ) final def getZonedDateTime(formatter: DateTimeFormatter): Get[ZonedDateTime] = Get.tryOrMessage( @@ -63,27 +66,35 @@ trait time { final def putZonedDateTime(formatter: DateTimeFormatter): Put[ZonedDateTime] = base.stringPut.contramap(_.format(formatter)) - implicit final val zonedDateTimeGetDefault: Get[ZonedDateTime] = getZonedDateTime( - ISO_ZONED_DATE_TIME - ) - implicit final val zonedDateTimePutDefault: Put[ZonedDateTime] = putZonedDateTime( - ISO_ZONED_DATE_TIME - ) + implicit final val zonedDateTimeGetDefault: Get[ZonedDateTime] = + getZonedDateTime( + ISO_ZONED_DATE_TIME + ) + implicit final val zonedDateTimePutDefault: Put[ZonedDateTime] = + putZonedDateTime( + ISO_ZONED_DATE_TIME + ) - final def 
getOffsetDateTime(formatter: DateTimeFormatter): Get[OffsetDateTime] = + final def getOffsetDateTime( + formatter: DateTimeFormatter + ): Get[OffsetDateTime] = Get.tryOrMessage( field => Try(OffsetDateTime.parse(field.x, formatter)), field => s"Failed to decode OffsetDateTime: Received Field $field" ) - final def putOffsetDateTime(formatter: DateTimeFormatter): Put[OffsetDateTime] = + final def putOffsetDateTime( + formatter: DateTimeFormatter + ): Put[OffsetDateTime] = base.stringPut.contramap(_.format(formatter)) - implicit final val offsetDateTimeGetDefault: Get[OffsetDateTime] = getOffsetDateTime( - ISO_OFFSET_DATE_TIME - ) - implicit final val offsetDateTimePutDefault: Put[OffsetDateTime] = putOffsetDateTime( - ISO_OFFSET_DATE_TIME - ) + implicit final val offsetDateTimeGetDefault: Get[OffsetDateTime] = + getOffsetDateTime( + ISO_OFFSET_DATE_TIME + ) + implicit final val offsetDateTimePutDefault: Put[OffsetDateTime] = + putOffsetDateTime( + ISO_OFFSET_DATE_TIME + ) final def getLocalDate(formatter: DateTimeFormatter): Get[LocalDate] = Get.tryOrMessage( @@ -93,8 +104,12 @@ trait time { final def putLocalDate(formatter: DateTimeFormatter): Put[LocalDate] = base.stringPut.contramap(_.format(formatter)) - implicit final val localDateGetDefault: Get[LocalDate] = getLocalDate(ISO_LOCAL_DATE) - implicit final val localDatePutDefault: Put[LocalDate] = putLocalDate(ISO_LOCAL_DATE) + implicit final val localDateGetDefault: Get[LocalDate] = getLocalDate( + ISO_LOCAL_DATE + ) + implicit final val localDatePutDefault: Put[LocalDate] = putLocalDate( + ISO_LOCAL_DATE + ) final def getLocalTime(formatter: DateTimeFormatter): Get[LocalTime] = Get.tryOrMessage( @@ -104,8 +119,12 @@ trait time { final def putLocalTime(formatter: DateTimeFormatter): Put[LocalTime] = base.stringPut.contramap(_.format(formatter)) - implicit final val localTimeGetDefault: Get[LocalTime] = getLocalTime(ISO_LOCAL_TIME) - implicit final val localTimePutDefault: Put[LocalTime] = 
putLocalTime(ISO_LOCAL_TIME) + implicit final val localTimeGetDefault: Get[LocalTime] = getLocalTime( + ISO_LOCAL_TIME + ) + implicit final val localTimePutDefault: Put[LocalTime] = putLocalTime( + ISO_LOCAL_TIME + ) final def getOffsetTime(formatter: DateTimeFormatter): Get[OffsetTime] = Get.tryOrMessage( @@ -115,8 +134,12 @@ trait time { final def putOffsetTime(formatter: DateTimeFormatter): Put[OffsetTime] = base.stringPut.contramap(_.format(formatter)) - implicit final val offsetTimeGetDefault: Get[OffsetTime] = getOffsetTime(ISO_OFFSET_TIME) - implicit final val offsetTimePutDefault: Put[OffsetTime] = putOffsetTime(ISO_OFFSET_TIME) + implicit final val offsetTimeGetDefault: Get[OffsetTime] = getOffsetTime( + ISO_OFFSET_TIME + ) + implicit final val offsetTimePutDefault: Put[OffsetTime] = putOffsetTime( + ISO_OFFSET_TIME + ) final def getYearMonth(formatter: DateTimeFormatter): Get[YearMonth] = Get.tryOrMessage( @@ -126,9 +149,14 @@ trait time { final def putYearMonth(formatter: DateTimeFormatter): Put[YearMonth] = base.stringPut.contramap(_.format(formatter)) - private final val yearMonthFormatter: DateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM") - implicit final val yearMonthGetDefault: Get[YearMonth] = getYearMonth(yearMonthFormatter) - implicit final val yearMonthPutDefault: Put[YearMonth] = putYearMonth(yearMonthFormatter) + private final val yearMonthFormatter: DateTimeFormatter = + DateTimeFormatter.ofPattern("yyyy-MM") + implicit final val yearMonthGetDefault: Get[YearMonth] = getYearMonth( + yearMonthFormatter + ) + implicit final val yearMonthPutDefault: Put[YearMonth] = putYearMonth( + yearMonthFormatter + ) implicit final val getPeriod: Get[Period] = Get.tryOrMessage( field => Try(Period.parse(field.x)), diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/put.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/put.scala index 2778fd19..8558286e 100644 --- 
a/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/put.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/put.scala @@ -6,17 +6,13 @@ trait put { implicit class putOps[A](a: A) { - /** - * Facilitates the transformation of any `A` with a `Put` - * instance into a field + /** Facilitates the transformation of any `A` with a `Put` instance into a + * field * - * @example {{{ - * //Before - * Put[String].put("hello") + * @example + * {{{ + * // Before + * Put[String].put("hello") * - * //After - * "Hello".field - * }}} + * + * // After + * "hello".field + * }}} */ def field(implicit P: Put[A]): CSV.Field = P.put(a) diff --git a/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/read.scala b/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/read.scala index 00d97616..a78276a9 100644 --- a/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/read.scala +++ b/modules/core/src/main/scala/io/chrisdavenport/cormorant/syntax/read.scala @@ -7,7 +7,8 @@ trait read { def readRow[A: Read]: Either[Error.DecodeFailure, A] = Decoding.readRow(csv) } implicit class readRows(csv: CSV.Rows) { - def readRows[A: Read]: List[Either[Error.DecodeFailure, A]] = Decoding.readRows(csv) + def readRows[A: Read]: List[Either[Error.DecodeFailure, A]] = + Decoding.readRows(csv) } implicit class readComplete(csv: CSV.Complete) { diff --git a/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala b/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala index d2fa1eef..53e0a6ee 100644 --- a/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala +++ b/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala @@ -6,50 +6,54 @@ import atto._ import Atto._ import io.chrisdavenport.cormorant.parser.CSVParser -/** - * I don't think this is good enough, I think we need a
custom pull which emits + * spec CSV Rows individually + */ package object fs2 { - /** - * Removes any empty row - i.e. Row(NonEmptyList.of("")) - */ - def parseRowsSafe[F[_]]: Pipe[F, String, Either[Error.ParseFailure, CSV.Row]] = - _.through(parseN[F, CSV.Row](opt(CSVParser.PERMISSIVE_CRLF) ~> CSVParser.record)) + /** Removes any empty row - i.e. Row(NonEmptyList.of("")) + */ + def parseRowsSafe[F[_]] + : Pipe[F, String, Either[Error.ParseFailure, CSV.Row]] = + _.through( + parseN[F, CSV.Row](opt(CSVParser.PERMISSIVE_CRLF) ~> CSVParser.record) + ) .through(clearEmptyRows) .map(row => Either.right(row)) - /** - * Removes any empty row - i.e. Row(NonEmptyList.of("")) - */ + /** Removes any empty row - i.e. Row(NonEmptyList.of("")) + */ def parseRows[F[_]: RaiseThrowable]: Pipe[F, String, CSV.Row] = _.through(parseRowsSafe).rethrow def readRowsSafe[F[_], A: Read]: Pipe[F, String, Either[Error, A]] = _.through(parseRowsSafe).map(_.leftWiden.flatMap(Read[A].read)) - def readRows[F[_]: RaiseThrowable, A: Read]: Pipe[F, String, A] = _.through(readRowsSafe).rethrow + def readRows[F[_]: RaiseThrowable, A: Read]: Pipe[F, String, A] = + _.through(readRowsSafe).rethrow - /** - * Read the first line as the headers, the rest as rows. - * This is super general to allow for better combinators based on it - * Removes Empty Rows - */ + /** Read the first line as the headers, the rest as rows. 
This is super + * general to allow for better combinators based on it Removes Empty Rows + */ def parseCompleteSafe[F[_]]: Pipe[ F, String, - Either[Error.ParseFailure, (CSV.Headers, Either[Error.ParseFailure, CSV.Row])] + Either[ + Error.ParseFailure, + (CSV.Headers, Either[Error.ParseFailure, CSV.Row]) + ] ] = { // _.through(cleanLastStringCRLF[F]) _.through(parse1(opt(CSVParser.PERMISSIVE_CRLF) ~> CSVParser.header)) .map[Either[Error.ParseFailure, (CSV.Headers, Stream[F, String])]] { - case (ParseResult.Done(rest, a), s) => Either.right((a, Stream(rest) ++ s)) - case (e, _) => e.either.leftMap(Error.ParseFailure.apply).map(h => (h, Stream.empty)) + case (ParseResult.Done(rest, a), s) => + Either.right((a, Stream(rest) ++ s)) + case (e, _) => + e.either.leftMap(Error.ParseFailure.apply).map(h => (h, Stream.empty)) } .flatMap { - _.traverse { - case (h, s) => - s.through(parseN(opt(CSVParser.PERMISSIVE_CRLF) ~> CSVParser.record)).map { row => + _.traverse { case (h, s) => + s.through(parseN(opt(CSVParser.PERMISSIVE_CRLF) ~> CSVParser.record)) + .map { row => (h, Either.right(row)) } } @@ -63,12 +67,15 @@ package object fs2 { s => { def go( r: ParseResult[A] - )(s: Stream[F, String]): Pull[F, (ParseResult[A], Stream[F, String]), Unit] = { + )( + s: Stream[F, String] + ): Pull[F, (ParseResult[A], Stream[F, String]), Unit] = { r match { case p @ ParseResult.Partial(_) => s.pull.uncons.flatMap { // Add String To Result If Stream Has More Values - case Some((c, rest)) => go(p.feed(Stream.chunk(c).compile.string))(rest) + case Some((c, rest)) => + go(p.feed(Stream.chunk(c).compile.string))(rest) // Reached Stream Termination and Still Partial - Return the partial // If we do not call done here, if this can still accept input it will // be a partial rather than a done. 
@@ -82,11 +89,14 @@ package object fs2 { private def parseN[F[_], A](p: Parser[A]): Pipe[F, String, A] = s => { - def exhaust(r: ParseResult[A], acc: List[A]): (ParseResult[A], List[A]) = { + def exhaust( + r: ParseResult[A], + acc: List[A] + ): (ParseResult[A], List[A]) = { r match { case ParseResult.Done(in, a) if in === "" => (r, a :: acc) case ParseResult.Done(in, a) => exhaust(p.parse(in), a :: acc) - case _ => (r, acc) + case _ => (r, acc) } } @@ -95,9 +105,9 @@ package object fs2 { case Some((c, rest)) => val s = Stream.chunk(c).compile.string val (r0, acc) = r match { - case ParseResult.Done(in, a) => (p.parse(in + s), List(a)) + case ParseResult.Done(in, a) => (p.parse(in + s), List(a)) case ParseResult.Fail(_, _, _) => (r, Nil) - case ParseResult.Partial(_) => (r.feed(s), Nil) + case ParseResult.Partial(_) => (r.feed(s), Nil) } val (r1, as) = exhaust(r0, acc) Pull.output(Chunk.from(as.reverse)) >> go(r1)(rest) @@ -108,19 +118,21 @@ package object fs2 { } private val emptyRow = CSV.Row(cats.data.NonEmptyList(CSV.Field(""), Nil)) - private type T = Either[Error.ParseFailure, (CSV.Headers, Either[Error.ParseFailure, CSV.Row])] + private type T = Either[ + Error.ParseFailure, + (CSV.Headers, Either[Error.ParseFailure, CSV.Row]) + ] private def clearEmptyRowsE[F[_]]: Pipe[F, T, T] = s => { def removeEmptyPull(s: Stream[F, T]): Pull[F, T, Unit] = { s.pull.uncons1.flatMap { case Some((next, rest)) => next - .flatMap { - case (_, eRow) => - eRow.map { row => - if (row == emptyRow) removeEmptyPull(rest) - else Pull.output1(next) >> removeEmptyPull(rest) - } + .flatMap { case (_, eRow) => + eRow.map { row => + if (row == emptyRow) removeEmptyPull(rest) + else Pull.output1(next) >> removeEmptyPull(rest) + } } .getOrElse(Pull.output1(next)) case None => Pull.done @@ -143,10 +155,14 @@ package object fs2 { removeEmpty(s).stream } - def parseComplete[F[_]: RaiseThrowable]: Pipe[F, String, (CSV.Headers, CSV.Row)] = - _.through(parseCompleteSafe).rethrow.map { case (h, 
e) => e.map((h, _)) }.rethrow + def parseComplete[F[_]: RaiseThrowable] + : Pipe[F, String, (CSV.Headers, CSV.Row)] = + _.through(parseCompleteSafe).rethrow + .map { case (h, e) => e.map((h, _)) } + .rethrow - def readLabelledCompleteSafe[F[_], A: LabelledRead]: Pipe[F, String, Either[Error, A]] = + def readLabelledCompleteSafe[F[_], A: LabelledRead] + : Pipe[F, String, Either[Error, A]] = _.through(parseCompleteSafe).map { e => for { (h, eRow) <- e @@ -161,52 +177,54 @@ package object fs2 { def encodeRows[F[_]](p: Printer): Pipe[F, CSV.Row, String] = _.map(p.print).intersperse(p.rowSeparator) - /** - * Converts the current `Stream` to a `Stream[F, String]` by encoding its content - * using the provided `Printer`. - * - * This method requires a valid `Write[A]` implicit instance. - * - * @example {{{ - * Stream - * .emits(list) - * .through(writeRows(headers, Printer.default)) - * }}} - */ + /** Converts the current `Stream` to a `Stream[F, String]` by encoding its + * content using the provided `Printer`. + * + * This method requires a valid `Write[A]` implicit instance. + * + * @example + * {{{ Stream .emits(list) .through(writeRows(headers, Printer.default)) + * }}} + */ def writeRows[F[_], A: Write](p: Printer): Pipe[F, A, String] = s => s.map(Write[A].write) .through(encodeRows(p)) - def encodeWithHeaders[F[_]](headers: CSV.Headers, p: Printer): Pipe[F, CSV.Row, String] = - s => Stream(p.print(headers)).covary[F] ++ Stream(p.rowSeparator) ++ s.through(encodeRows(p)) - - /** - * Converts the current `Stream` to a `Stream[F, String]` by encoding its content - * using the provided `Printer` and prepending the provided headers. - * - * This method requires a valid `Write[A]` implicit instance. 
- * - * @example {{{ - * Stream - * .emits(list) - * .through(writeWithHeaders(headers, Printer.default)) - * }}} - */ - def writeWithHeaders[F[_], A: Write](headers: CSV.Headers, p: Printer): Pipe[F, A, String] = - s => Stream(p.print(headers)).covary[F] ++ Stream(p.rowSeparator) ++ s.through(writeRows(p)) - - /** - * Converts the current `Stream` to a `Stream[F, String]` by encoding its content - * using the provided `Printer` and prepending the headers extracted from a valid - * `LabelledWrite[A]` implicit instance. - * - * @example {{{ - * Stream - * .emits(list) - * .through(writeLabelled(Printer.default)) - * }}} - */ + def encodeWithHeaders[F[_]]( + headers: CSV.Headers, + p: Printer + ): Pipe[F, CSV.Row, String] = + s => + Stream(p.print(headers)).covary[F] ++ Stream(p.rowSeparator) ++ s.through( + encodeRows(p) + ) + + /** Converts the current `Stream` to a `Stream[F, String]` by encoding its + * content using the provided `Printer` and prepending the provided headers. + * + * This method requires a valid `Write[A]` implicit instance. + * + * @example + * {{{ Stream .emits(list) .through(writeWithHeaders(headers, + * Printer.default)) }}} + */ + def writeWithHeaders[F[_], A: Write]( + headers: CSV.Headers, + p: Printer + ): Pipe[F, A, String] = + s => + Stream(p.print(headers)).covary[F] ++ Stream(p.rowSeparator) ++ s.through( + writeRows(p) + ) + + /** Converts the current `Stream` to a `Stream[F, String]` by encoding its + * content using the provided `Printer` and prepending the headers extracted + * from a valid `LabelledWrite[A]` implicit instance. 
+ * + * @example + * {{{ Stream .emits(list) .through(writeLabelled(Printer.default)) }}} + */ def writeLabelled[F[_], A: LabelledWrite](p: Printer): Pipe[F, A, String] = s => s.through(writeWithHeaders(LabelledWrite[A].headers, p)(new Write[A] { diff --git a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/auto.scala b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/auto.scala index 65f0b854..4eea0dab 100644 --- a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/auto.scala +++ b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/auto.scala @@ -3,32 +3,32 @@ package io.chrisdavenport.cormorant.generic import io.chrisdavenport.cormorant._ import shapeless._ -/** - * Fully Automatic Derivation of Any Product Type - **/ -object auto - extends internal.LabelledReadProofs - with internal.LabelledWriteProofs - with internal.ReadProofs - with internal.WriteProofs{ +/** Fully Automatic Derivation of Any Product Type + */ +object auto + extends internal.LabelledReadProofs + with internal.LabelledWriteProofs + with internal.ReadProofs + with internal.WriteProofs { - implicit def deriveWrite[A, R]( - implicit gen: Generic.Aux[A, R], - enc: Write[R] + implicit def deriveWrite[A, R](implicit + gen: Generic.Aux[A, R], + enc: Write[R] ): Write[A] = semiauto.deriveWrite[A, R] - implicit def deriveLabelledWrite[A, H <: HList]( - implicit gen: LabelledGeneric.Aux[A, H], - hlw: Lazy[LabelledWrite[H]] + implicit def deriveLabelledWrite[A, H <: HList](implicit + gen: LabelledGeneric.Aux[A, H], + hlw: Lazy[LabelledWrite[H]] ): LabelledWrite[A] = semiauto.deriveLabelledWrite[A, H] - implicit def deriveRead[A, R]( - implicit gen: Generic.Aux[A, R], - R: Lazy[Read[R]] + implicit def deriveRead[A, R](implicit + gen: Generic.Aux[A, R], + R: Lazy[Read[R]] ): Read[A] = semiauto.deriveRead[A, R] - implicit def deriveLabelledRead[A, H <: HList]( - implicit gen: LabelledGeneric.Aux[A, H], - hlw: Lazy[LabelledRead[H]]): 
LabelledRead[A] = semiauto.deriveLabelledRead + implicit def deriveLabelledRead[A, H <: HList](implicit + gen: LabelledGeneric.Aux[A, H], + hlw: Lazy[LabelledRead[H]] + ): LabelledRead[A] = semiauto.deriveLabelledRead } diff --git a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/read.scala b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/read.scala index 764f3e0f..3dd96c72 100644 --- a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/read.scala +++ b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/read.scala @@ -7,56 +7,74 @@ import cats.data.NonEmptyList trait ReadProofs extends LowPriorityReadProofs { - implicit def readHNil[H](implicit G: Get[H]): Read[H :: HNil] = new Read[H :: HNil] { - def readPartial(a: CSV.Row): Either[Error.DecodeFailure, Either[(CSV.Row, H :: HNil), H :: HNil]] = a match { - case CSV.Row(NonEmptyList(f, Nil)) => - G.get(f).map(h => h :: HNil).map(Either.right) - case CSV.Row(NonEmptyList(f, rest)) => - NonEmptyList.fromList(rest) match { - case Some(nel) => - G.get(f).map(h => h :: HNil) - .map(h => Either.left((CSV.Row(nel), h))) - case None => - Either.left(Error.DecodeFailure.single(s"Unexpected Input: Did Not Expect - $a")) + implicit def readHNil[H](implicit G: Get[H]): Read[H :: HNil] = + new Read[H :: HNil] { + def readPartial( + a: CSV.Row + ): Either[Error.DecodeFailure, Either[(CSV.Row, H :: HNil), H :: HNil]] = + a match { + case CSV.Row(NonEmptyList(f, Nil)) => + G.get(f).map(h => h :: HNil).map(Either.right) + case CSV.Row(NonEmptyList(f, rest)) => + NonEmptyList.fromList(rest) match { + case Some(nel) => + G.get(f) + .map(h => h :: HNil) + .map(h => Either.left((CSV.Row(nel), h))) + case None => + Either.left( + Error.DecodeFailure.single( + s"Unexpected Input: Did Not Expect - $a" + ) + ) + } } } - } - implicit def hlistRead[H, T <: HList]( - implicit G: Get[H], + implicit def hlistRead[H, T <: HList](implicit + 
G: Get[H], R: Lazy[Read[T]] ): Read[H :: T] = new Read[H :: T] { - def readPartial(a: CSV.Row): Either[Error.DecodeFailure, Either[(CSV.Row, H :: T), H :: T]] = a match { - case CSV.Row(NonEmptyList(h, t)) => - ( - G.get(h), - NonEmptyList.fromList(t) - .fold( - Either.left[Error.DecodeFailure, Either[(CSV.Row, T), T]](Error.DecodeFailure.single("Unexpected End Of Input")) - )(nel => - R.value.readPartial(CSV.Row(nel)) - ) - ).parMapN{ - case (h, Left((row, t))) => Either.left((row, h :: t)) - case (h, Right(t)) => Either.right(h :: t) - } - } + def readPartial( + a: CSV.Row + ): Either[Error.DecodeFailure, Either[(CSV.Row, H :: T), H :: T]] = + a match { + case CSV.Row(NonEmptyList(h, t)) => + ( + G.get(h), + NonEmptyList + .fromList(t) + .fold( + Either.left[Error.DecodeFailure, Either[(CSV.Row, T), T]]( + Error.DecodeFailure.single("Unexpected End Of Input") + ) + )(nel => R.value.readPartial(CSV.Row(nel))) + ).parMapN { + case (h, Left((row, t))) => Either.left((row, h :: t)) + case (h, Right(t)) => Either.right(h :: t) + } + } } } -private[internal] trait LowPriorityReadProofs{ - implicit def hlistRead2[H, T <: HList]( - implicit RH: Lazy[Read[H]], - RT: Lazy[Read[T]] - ): Read[H :: T] = new Read[H :: T]{ - def readPartial(a: CSV.Row): Either[Error.DecodeFailure,Either[(CSV.Row, H :: T),H :: T]] = - RH.value.readPartial(a).flatMap{ - case Left((row, h)) => RT.value.readPartial(row).map{ - case Left((row, t)) => Left((row, h :: t)) - case Right(t) => Right(h:: t) - } - case Right(value) => - Either.left(Error.DecodeFailure.single(s"Incomplete Output - $value only")) +private[internal] trait LowPriorityReadProofs { + implicit def hlistRead2[H, T <: HList](implicit + RH: Lazy[Read[H]], + RT: Lazy[Read[T]] + ): Read[H :: T] = new Read[H :: T] { + def readPartial( + a: CSV.Row + ): Either[Error.DecodeFailure, Either[(CSV.Row, H :: T), H :: T]] = + RH.value.readPartial(a).flatMap { + case Left((row, h)) => + RT.value.readPartial(row).map { + case Left((row, t)) => 
Left((row, h :: t)) + case Right(t) => Right(h :: t) + } + case Right(value) => + Either.left( + Error.DecodeFailure.single(s"Incomplete Output - $value only") + ) } } -} \ No newline at end of file +} diff --git a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/readlabelled.scala b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/readlabelled.scala index c779f1dc..7d30a667 100644 --- a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/readlabelled.scala +++ b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/readlabelled.scala @@ -7,66 +7,75 @@ import cats.syntax.all._ trait LabelledReadProofs extends LowPriorityLabelledReadProofs { implicit val labelledReadHNil: LabelledRead[HNil] = new LabelledRead[HNil] { - def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure,HNil] = + def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure, HNil] = Right(HNil) } } -private[internal] trait LowPriorityLabelledReadProofs - extends LowPriorityLabelledReadProofs1 { +private[internal] trait LowPriorityLabelledReadProofs + extends LowPriorityLabelledReadProofs1 { - implicit def deriveLabelledReadHList[K <: Symbol, H, T <: HList]( - implicit witness: Witness.Aux[K], + implicit def deriveLabelledReadHList[K <: Symbol, H, T <: HList](implicit + witness: Witness.Aux[K], P: Get[H], labelledRead: Lazy[LabelledRead[T]] - ): LabelledRead[FieldType[K, H] :: T] = new LabelledRead[FieldType[K, H] :: T] { - def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure, FieldType[K, H] :: T] = { - val header = CSV.Header(witness.value.name) - ( - Cursor.decodeAtHeader[H](header)(h, a).map(field[K](_)), - labelledRead.value.read(a, h) - ) - .parMapN(_ :: _) + ): LabelledRead[FieldType[K, H] :: T] = + new LabelledRead[FieldType[K, H] :: T] { + def read( + a: CSV.Row, + h: CSV.Headers + ): Either[Error.DecodeFailure, FieldType[K, H] :: T] = { + val header = 
CSV.Header(witness.value.name) + ( + Cursor.decodeAtHeader[H](header)(h, a).map(field[K](_)), + labelledRead.value.read(a, h) + ) + .parMapN(_ :: _) + } } - } } private[internal] trait LowPriorityLabelledReadProofs1 - extends LowPriorityLabelledReadProofs2 { - implicit def deriveLabelledRead2[H, T <: HList]( - implicit - P: LabelledRead[H], - labelledRead: Lazy[LabelledRead[T]] + extends LowPriorityLabelledReadProofs2 { + implicit def deriveLabelledRead2[H, T <: HList](implicit + P: LabelledRead[H], + labelledRead: Lazy[LabelledRead[T]] ): LabelledRead[H :: T] = new LabelledRead[H :: T] { - def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure, H :: T] = { + def read( + a: CSV.Row, + h: CSV.Headers + ): Either[Error.DecodeFailure, H :: T] = { ( P.read(a, h), labelledRead.value.read(a, h) ) - .parMapN{ - case (h, t) => h :: t + .parMapN { case (h, t) => + h :: t } } } } private[internal] trait LowPriorityLabelledReadProofs2 { - implicit def deriveLabelledRead3[K <: Symbol, H, T <: HList]( - implicit - P: LabelledRead[H], - labelledRead: Lazy[LabelledRead[T]] - ): LabelledRead[FieldType[K, H] :: T] = new LabelledRead[FieldType[K, H] :: T] { + implicit def deriveLabelledRead3[K <: Symbol, H, T <: HList](implicit + P: LabelledRead[H], + labelledRead: Lazy[LabelledRead[T]] + ): LabelledRead[FieldType[K, H] :: T] = + new LabelledRead[FieldType[K, H] :: T] { - def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure, FieldType[K, H] :: T] = { - ( - P.read(a, h), - labelledRead.value.read(a, h) - ) - .parMapN{ - case (h, t) => field[K](h) :: t - } + def read( + a: CSV.Row, + h: CSV.Headers + ): Either[Error.DecodeFailure, FieldType[K, H] :: T] = { + ( + P.read(a, h), + labelledRead.value.read(a, h) + ) + .parMapN { case (h, t) => + field[K](h) :: t + } + } } - } -} \ No newline at end of file +} diff --git a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/write.scala 
b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/write.scala index ffdd3677..8256d164 100644 --- a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/write.scala +++ b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/write.scala @@ -5,14 +5,14 @@ import shapeless._ import cats.data._ trait WriteProofs extends LowPriorityWriteProofs { - implicit def hnilWrite[H](implicit P: Put[H]): Write[H :: HNil] = new Write[H :: HNil] { - def write(a: H :: HNil): CSV.Row = CSV.Row(NonEmptyList.one(P.put(a.head))) - } - - + implicit def hnilWrite[H](implicit P: Put[H]): Write[H :: HNil] = + new Write[H :: HNil] { + def write(a: H :: HNil): CSV.Row = + CSV.Row(NonEmptyList.one(P.put(a.head))) + } - implicit def hlistWrite[H, T <: HList]( - implicit P: Put[H], + implicit def hlistWrite[H, T <: HList](implicit + P: Put[H], W: Write[T] ): Write[H :: T] = new Write[H :: T] { def write(a: H :: T): CSV.Row = { @@ -22,11 +22,11 @@ trait WriteProofs extends LowPriorityWriteProofs { } private[internal] trait LowPriorityWriteProofs { - implicit def hlistWriteW[H, T <: HList]( - implicit WH: Write[H], - W: Write[T] - ): Write[H :: T] = new Write[H :: T]{ - def write(a: H :: T): CSV.Row = + implicit def hlistWriteW[H, T <: HList](implicit + WH: Write[H], + W: Write[T] + ): Write[H :: T] = new Write[H :: T] { + def write(a: H :: T): CSV.Row = CSV.Row(WH.write(a.head).l.concatNel(W.write(a.tail).l)) } -} \ No newline at end of file +} diff --git a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/writelabelled.scala b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/writelabelled.scala index fb37f4e1..3bb6e213 100644 --- a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/writelabelled.scala +++ b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/internal/writelabelled.scala @@ -6,40 +6,34 @@ import shapeless.labelled._ 
import cats.syntax.all._ import cats.data.NonEmptyList -trait LabelledWriteProofs - extends LowPriorityLabelledWriteProofs { +trait LabelledWriteProofs extends LowPriorityLabelledWriteProofs { - /** - * Base of Logical Induction for Put Systems. - * - * This proves Given a Put Before an HNil, so a single value - * within a product type. That we can serialize this field by name - * and value into a CSV - * - **/ - implicit def labelledWriteHNilPut[K <: Symbol, H]( - implicit witness: Witness.Aux[K], - P: Put[H] - ): LabelledWrite[FieldType[K, H] :: HNil] = + /** Base of logical induction for Put systems. + * + * This proves that, given a Put before an HNil (a single value within a + * product type), we can serialize this field by name and value into a CSV. + */ + implicit def labelledWriteHNilPut[K <: Symbol, H](implicit + witness: Witness.Aux[K], + P: Put[H] + ): LabelledWrite[FieldType[K, H] :: HNil] = new LabelledWrite[FieldType[K, H] :: HNil] { def headers: CSV.Headers = CSV.Headers(NonEmptyList.one(CSV.Header(witness.value.name))) - def write(a: FieldType[K, H] :: HNil): CSV.Row = + def write(a: FieldType[K, H] :: HNil): CSV.Row = CSV.Row(NonEmptyList.one(P.put(a.head))) } } private[internal] trait LowPriorityLabelledWriteProofs - extends LowPriorityLabelledWriteProofs1 { - /** - * This is the logical extension of the above base induction - * case Given som Field type with a name we serialize that field - * as - * - **/ - implicit def deriveByNameHListPut[K <: Symbol, H, T <: HList]( - implicit witness: Witness.Aux[K], + extends LowPriorityLabelledWriteProofs1 { + + /** This is the logical extension of the base induction case above: given + * some Field type with a name, we serialize that field by name and value. + */ + implicit def deriveByNameHListPut[K <: Symbol, H, T <: HList](implicit + witness: Witness.Aux[K], P: Put[H], labelledWrite: Lazy[LabelledWrite[T]] ): LabelledWrite[FieldType[K, H] :: T] = @@ -47,7 +41,7 @@ private[internal] trait LowPriorityLabelledWriteProofs def 
headers: CSV.Headers = { CSV.Headers( NonEmptyList.one(CSV.Header(witness.value.name)) <+> - labelledWrite.value.headers.l + labelledWrite.value.headers.l ) } def write(a: FieldType[K, H] :: T): CSV.Row = @@ -56,36 +50,35 @@ private[internal] trait LowPriorityLabelledWriteProofs } private[internal] trait LowPriorityLabelledWriteProofs1 - extends LowPriorityLabelledWriteProofs2 { - implicit def deriveByLabelledWrite2[K, H, T <: HList]( - implicit P: LabelledWrite[H], - labelledWrite: Lazy[LabelledWrite[T]] + extends LowPriorityLabelledWriteProofs2 { + implicit def deriveByLabelledWrite2[K, H, T <: HList](implicit + P: LabelledWrite[H], + labelledWrite: Lazy[LabelledWrite[T]] ): LabelledWrite[FieldType[K, H] :: T] = - new LabelledWrite[FieldType[K, H]:: T] { + new LabelledWrite[FieldType[K, H] :: T] { def headers: CSV.Headers = { CSV.Headers( P.headers.l <+> - labelledWrite.value.headers.l + labelledWrite.value.headers.l ) } def write(a: FieldType[K, H] :: T): CSV.Row = { - CSV.Row(P.write(a.head).l.concatNel(labelledWrite.value.write(a.tail).l)) + CSV.Row( + P.write(a.head).l.concatNel(labelledWrite.value.write(a.tail).l) + ) } } } private[internal] trait LowPriorityLabelledWriteProofs2 { - implicit def labelledWriteHNilGet[K, H]( - implicit W: LabelledWrite[H] - ): LabelledWrite[FieldType[K, H] :: HNil] = + implicit def labelledWriteHNilGet[K, H](implicit + W: LabelledWrite[H] + ): LabelledWrite[FieldType[K, H] :: HNil] = new LabelledWrite[FieldType[K, H] :: HNil] { def headers: CSV.Headers = W.headers - def write(a: FieldType[K, H] :: HNil): CSV.Row = + def write(a: FieldType[K, H] :: HNil): CSV.Row = W.write(a.head) } - -} - - +} diff --git a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/semiauto.scala b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/semiauto.scala index 552bc2c0..359aa0e5 100644 --- a/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/semiauto.scala +++ 
b/modules/generic/src/main/scala/io/chrisdavenport/cormorant/generic/semiauto.scala @@ -3,46 +3,49 @@ package io.chrisdavenport.cormorant.generic import io.chrisdavenport.cormorant._ import shapeless._ -object semiauto - extends internal.LabelledReadProofs - with internal.LabelledWriteProofs - with internal.ReadProofs - with internal.WriteProofs { - - def deriveWrite[A, R]( - implicit gen: Generic.Aux[A, R], +object semiauto + extends internal.LabelledReadProofs + with internal.LabelledWriteProofs + with internal.ReadProofs + with internal.WriteProofs { + + def deriveWrite[A, R](implicit + gen: Generic.Aux[A, R], enc: Write[R] ): Write[A] = new Write[A] { def write(a: A): CSV.Row = Write[gen.Repr].write(gen.to(a)) } - - def deriveLabelledWrite[A, H <: HList]( - implicit gen: LabelledGeneric.Aux[A, H], - hlw: Lazy[LabelledWrite[H]]): LabelledWrite[A] = new LabelledWrite[A] { + def deriveLabelledWrite[A, H <: HList](implicit + gen: LabelledGeneric.Aux[A, H], + hlw: Lazy[LabelledWrite[H]] + ): LabelledWrite[A] = new LabelledWrite[A] { val writeH: LabelledWrite[H] = hlw.value def headers: CSV.Headers = writeH.headers def write(a: A): CSV.Row = writeH.write(gen.to(a)) } - def deriveRead[A, R]( - implicit gen: Generic.Aux[A, R], + def deriveRead[A, R](implicit + gen: Generic.Aux[A, R], R: Lazy[Read[R]] ): Read[A] = new Read[A] { - def readPartial(a: CSV.Row): Either[Error.DecodeFailure, Either[(CSV.Row, A), A]] ={ - R.value.readPartial(a).map{ + def readPartial( + a: CSV.Row + ): Either[Error.DecodeFailure, Either[(CSV.Row, A), A]] = { + R.value.readPartial(a).map { case Left((csv, r)) => Left((csv, gen.from(r))) - case Right(r) => Right(gen.from(r)) + case Right(r) => Right(gen.from(r)) } } } - def deriveLabelledRead[A, H <: HList]( - implicit gen: LabelledGeneric.Aux[A, H], - hlw: Lazy[LabelledRead[H]]): LabelledRead[A] = new LabelledRead[A] { + def deriveLabelledRead[A, H <: HList](implicit + gen: LabelledGeneric.Aux[A, H], + hlw: Lazy[LabelledRead[H]] + ): 
LabelledRead[A] = new LabelledRead[A] { val readH: LabelledRead[H] = hlw.value def read(a: CSV.Row, h: CSV.Headers): Either[Error.DecodeFailure, A] = readH.read(a, h).map(gen.from(_)) } -} \ No newline at end of file +} diff --git a/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/CSVLikeParser.scala b/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/CSVLikeParser.scala index eb050c3a..16fd249b 100644 --- a/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/CSVLikeParser.scala +++ b/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/CSVLikeParser.scala @@ -5,68 +5,59 @@ import atto._ import Atto._ import cats.data._ import cats.syntax.all._ -/** - * This CSVParser tries to stay fairly close to the initial specification - * https://tools.ietf.org/html/rfc4180 - * - * Deviations from the specification, here I have chosen to use a - * permissive CRLF that will accept a CRLF, LF, or CR. - * Note that the CR is not directly in the initial spec, but in rare - * cases csvs can have this delimiter - * - * The important details are as follows - * 1. Each record is located on a separate line, delimited by a line - * break (CRLF). For example: - * - * aaa,bbb,ccc CRLF - * zzz,yyy,xxx CRLF - * - * 2. The last record in the file may or may not have an ending line - * break. For example: - * - * aaa,bbb,ccc CRLF - * zzz,yyy,xxx - * - * 3. There maybe an optional header line appearing as the first line - * of the file with the same format as normal record lines. This - * header will contain names corresponding to the fields in the file - * and should contain the same number of fields as the records in - * the rest of the file (the presence or absence of the header line - * should be indicated via the optional "header" parameter of this - * MIME type). For example: - * - * field_name,field_name,field_name CRLF - * aaa,bbb,ccc CRLF - * zzz,yyy,xxx CRLF - * 4. 
Within the header and each record, there may be one or more - * fields, separated by commas. Each line should contain the same - * number of fields throughout the file. Spaces are considered part - * of a field and should not be ignored. The last field in the - * record must not be followed by a comma. For example: - * - * aaa,bbb,ccc - * - * 5. Each field may or may not be enclosed in double quotes (however - * some programs, such as Microsoft Excel, do not use double quotes - * at all). If fields are not enclosed with double quotes, then - * double quotes may not appear inside the fields. For example: - * - * "aaa","bbb","ccc" CRLF - * zzz,yyy,xxx - * - * 6. Fields containing line breaks (CRLF), double quotes, and commas - * should be enclosed in double-quotes. For example: - * - * "aaa","b CRLF - * bb","ccc" CRLF - * zzz,yyy,xxx - * - * 7. If double-quotes are used to enclose fields, then a double-quote - * appearing inside a field must be escaped by preceding it with - * another double quote. For example: - * - * "aaa","b""bb","ccc" - **/ + +/** This CSVParser tries to stay fairly close to the initial specification + * https://tools.ietf.org/html/rfc4180 + * + * Deviations from the specification: here I have chosen to use a permissive + * CRLF that will accept a CRLF, LF, or CR. Note that the CR is not directly in + * the initial spec, but in rare cases CSVs can have this delimiter. + * + * The important details are as follows + * 1. Each record is located on a separate line, delimited by a line break + * (CRLF). For example: + * + * aaa,bbb,ccc CRLF zzz,yyy,xxx CRLF + * + * 2. The last record in the file may or may not have an ending line break. For + * example: + * + * aaa,bbb,ccc CRLF zzz,yyy,xxx + * + * 3. There may be an optional header line appearing as the first line of the + * file with the same format as normal record lines. 
This header will contain + * names corresponding to the fields in the file and should contain the same + * number of fields as the records in the rest of the file (the presence or + * absence of the header line should be indicated via the optional "header" + * parameter of this MIME type). For example: + * + * field_name,field_name,field_name CRLF aaa,bbb,ccc CRLF zzz,yyy,xxx CRLF 4. + * Within the header and each record, there may be one or more fields, + * separated by commas. Each line should contain the same number of fields + * throughout the file. Spaces are considered part of a field and should not be + * ignored. The last field in the record must not be followed by a comma. For + * example: + * + * aaa,bbb,ccc + * + * 5. Each field may or may not be enclosed in double quotes (however some + * programs, such as Microsoft Excel, do not use double quotes at all). If + * fields are not enclosed with double quotes, then double quotes may not + * appear inside the fields. For example: + * + * "aaa","bbb","ccc" CRLF zzz,yyy,xxx + * + * 6. Fields containing line breaks (CRLF), double quotes, and commas should be + * enclosed in double-quotes. For example: + * + * "aaa","b CRLF bb","ccc" CRLF zzz,yyy,xxx + * + * 7. If double-quotes are used to enclose fields, then a double-quote + * appearing inside a field must be escaped by preceding it with another double + * quote. 
For example: + * + * "aaa","b""bb","ccc" + */ abstract class CSVLikeParser(val separator: Char) { val dquote: Char = '\"' val dquoteS: String = dquote.toString @@ -80,7 +71,7 @@ abstract class CSVLikeParser(val separator: Char) { val DQUOTE: Parser[Char] = char(dquote) // Used For Easier Composition in escaped spec is referred to as 2DQUOTE val TWO_DQUOTE: Parser[(Char, Char)] = DQUOTE ~ DQUOTE - //CR = %x0D ;as per section 6.1 of RFC 2234 [2] + // CR = %x0D ;as per section 6.1 of RFC 2234 [2] val CR: Parser[Char] = char(cr) // LF = %x0A ;as per section 6.1 of RFC 2234 [2] val LF: Parser[Char] = char(lf) @@ -95,11 +86,14 @@ abstract class CSVLikeParser(val separator: Char) { val SEPARATOR: Parser[Char] = char(separator).named("SEPARATOR") // TEXTDATA = %x20-21 / %x23-2B / %x2D-7E - val TEXTDATA: Parser[Char] = noneOf(dquoteS + separatorS + crS + lfS).named("TEXTDATA") + val TEXTDATA: Parser[Char] = + noneOf(dquoteS + separatorS + crS + lfS).named("TEXTDATA") // escaped = DQUOTE *(TEXTDATA / COMMA / CR / LF / 2DQUOTE) DQUOTE val escaped: Parser[CSV.Field] = - (DQUOTE ~> many(TEXTDATA | SEPARATOR | CR | LF | TWO_DQUOTE.map(_ => dquote)) + (DQUOTE ~> many( + TEXTDATA | SEPARATOR | CR | LF | TWO_DQUOTE.map(_ => dquote) + ) .map(_.mkString) .map(CSV.Field) <~ DQUOTE) .named("escaped") diff --git a/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/package.scala b/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/package.scala index 29d89eea..590c02fc 100644 --- a/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/package.scala +++ b/modules/parser/src/main/scala/io/chrisdavenport/cormorant/parser/package.scala @@ -9,10 +9,16 @@ import Atto._ package object parser { object CSVParser extends CSVLikeParser(',') - def parseField(text: String, parser: CSVLikeParser = CSVParser): Either[ParseFailure, CSV.Field] = + def parseField( + text: String, + parser: CSVLikeParser = CSVParser + ): Either[ParseFailure, CSV.Field] = 
parser.field.parseOnly(text).either.leftMap(ParseFailure.apply) - def parseRow(text: String, parser: CSVLikeParser = CSVParser): Either[ParseFailure, CSV.Row] = + def parseRow( + text: String, + parser: CSVLikeParser = CSVParser + ): Either[ParseFailure, CSV.Row] = parser.record.parseOnly(text).either.leftMap(ParseFailure.apply) def parseHeader( @@ -42,7 +48,8 @@ package object parser { // same number of fields. We use this to remove this data we know to // be unclear in the specification. In CSV.Rows, we use the first row // as the size of reference - case rows @ CSV.Rows(CSV.Row(x) :: _) if cleanup && x.size > 1 => filterLastRowIfEmpty(rows) + case rows @ CSV.Rows(CSV.Row(x) :: _) if cleanup && x.size > 1 => + filterLastRowIfEmpty(rows) case otherwise => otherwise } @@ -55,39 +62,48 @@ package object parser { .parseOnly(text) .either .leftMap(ParseFailure.apply) - .map { - case c @ CSV.Complete(h @ CSV.Headers(headers), rows) => - // Due to The Grammar Being Unclear CRLF can and will be parsed as - // a field. However the specification states that each must have the - // same number of fields. We use this to remove this data we know to - // be unclear in the specification. In CSV.Complete, we use headers - // as the size of reference. - if (cleanup && headers.size > 1) { - CSV.Complete(h, filterLastRowIfEmpty(rows)) - } else { - c - } + .map { case c @ CSV.Complete(h @ CSV.Headers(headers), rows) => + // Due to The Grammar Being Unclear CRLF can and will be parsed as + // a field. However the specification states that each must have the + // same number of fields. We use this to remove this data we know to + // be unclear in the specification. In CSV.Complete, we use headers + // as the size of reference. 
+ if (cleanup && headers.size > 1) { + CSV.Complete(h, filterLastRowIfEmpty(rows)) + } else { + c + } } object TSVParser extends CSVLikeParser('\t') - def parseTSVField(text: String): Either[ParseFailure, CSV.Field] = parseField(text, TSVParser) + def parseTSVField(text: String): Either[ParseFailure, CSV.Field] = + parseField(text, TSVParser) - def parseTSVRow(text: String): Either[ParseFailure, CSV.Row] = parseRow(text, TSVParser) + def parseTSVRow(text: String): Either[ParseFailure, CSV.Row] = + parseRow(text, TSVParser) - def parseTSVHeader(text: String): Either[ParseFailure, CSV.Header] = parseHeader(text, TSVParser) + def parseTSVHeader(text: String): Either[ParseFailure, CSV.Header] = + parseHeader(text, TSVParser) def parseTSVHeaders(text: String): Either[ParseFailure, CSV.Headers] = parseHeaders(text, TSVParser) - def parseTSVRows(text: String, cleanup: Boolean = true): Either[ParseFailure, CSV.Rows] = + def parseTSVRows( + text: String, + cleanup: Boolean = true + ): Either[ParseFailure, CSV.Rows] = parseRows(text, cleanup, TSVParser) - def parseTSVComplete(text: String, cleanup: Boolean = true): Either[ParseFailure, CSV.Complete] = + def parseTSVComplete( + text: String, + cleanup: Boolean = true + ): Either[ParseFailure, CSV.Complete] = parseComplete(text, cleanup, TSVParser) private def filterLastRowIfEmpty(rows: CSV.Rows): CSV.Rows = { rows.rows.reverse match { - case x :: xl if x == CSV.Row(NonEmptyList(CSV.Field(""), Nil)) => CSV.Rows(xl.reverse) + case x :: xl if x == CSV.Row(NonEmptyList(CSV.Field(""), Nil)) => + CSV.Rows(xl.reverse) case _ => rows } } diff --git a/modules/refined/src/main/scala/io/chrisdavenport/cormorant/refined/package.scala b/modules/refined/src/main/scala/io/chrisdavenport/cormorant/refined/package.scala index e2ba9fd5..72471a93 100644 --- a/modules/refined/src/main/scala/io/chrisdavenport/cormorant/refined/package.scala +++ b/modules/refined/src/main/scala/io/chrisdavenport/cormorant/refined/package.scala @@ -5,21 +5,21 @@ 
import eu.timepit.refined.api.{RefType, Validate} package object refined { - implicit final def refinedPut[T, P, F[_, _]]( - implicit + implicit final def refinedPut[T, P, F[_, _]](implicit underlying: Put[T], - refType: RefType[F]): Put[F[T, P]] = underlying.contramap(refType.unwrap) + refType: RefType[F] + ): Put[F[T, P]] = underlying.contramap(refType.unwrap) - implicit final def refinedGet[T, P, F[_, _]]( - implicit + implicit final def refinedGet[T, P, F[_, _]](implicit underlying: Get[T], validate: Validate[T, P], - refType: RefType[F]): Get[F[T, P]] = new Get[F[T, P]] { + refType: RefType[F] + ): Get[F[T, P]] = new Get[F[T, P]] { def get(field: CSV.Field): Either[Error.DecodeFailure, F[T, P]] = underlying.get(field) match { case Right(t) => refType.refine(t) match { - case Left(err) => Either.left(Error.DecodeFailure.single(err)) + case Left(err) => Either.left(Error.DecodeFailure.single(err)) case Right(ftp) => Either.right(ftp) } case Left(d) => Either.left(d) From ae1dd6578511e50e971dec12387b85b15473bbe7 Mon Sep 17 00:00:00 2001 From: rbbowd Date: Thu, 18 Apr 2024 09:29:26 -0400 Subject: [PATCH 08/15] add ci scripts --- scripts/build-scala-ci.sh | 15 +++++++++++++++ scripts/sbt-ci.sh | 4 ++++ 2 files changed, 19 insertions(+) create mode 100755 scripts/build-scala-ci.sh create mode 100755 scripts/sbt-ci.sh diff --git a/scripts/build-scala-ci.sh b/scripts/build-scala-ci.sh new file mode 100755 index 00000000..5257f646 --- /dev/null +++ b/scripts/build-scala-ci.sh @@ -0,0 +1,15 @@ +#!/bin/sh -e + +usage() { + echo "Usage: $0 GIT_BRANCH" + exit 1 +} + +if [ "$#" -ne 1 ]; then + usage +fi + +git_branch="$1" + +sbt "reload:update" +./scripts/sbt-ci.sh "$EXTRA_SBT_ARGS" diff --git a/scripts/sbt-ci.sh b/scripts/sbt-ci.sh new file mode 100755 index 00000000..af4b1773 --- /dev/null +++ b/scripts/sbt-ci.sh @@ -0,0 +1,4 @@ +#!/bin/sh -e +export NEXUS_USER=jenkins +export SBT_OPTS="-Xms2G -Xmx8G -Xss2M -XX:MaxMetaspaceSize=8G" +exec sbt "-Dsbt.ci=true" "$@" From 
9697ea5713123f5a6b4bcb4ca40ced4b91a726ee Mon Sep 17 00:00:00 2001 From: rbbowd Date: Thu, 18 Apr 2024 09:33:13 -0400 Subject: [PATCH 09/15] more formatting --- build.sbt | 6 +- .../cormorant/CormorantArbitraries.scala | 49 +++-- .../cormorant/PrinterSpec.scala | 21 ++- .../cormorant/fs2/package.scala | 2 +- .../cormorant/fs2/StreamingParserSpec.scala | 27 ++- .../cormorant/fs2/StreamingPrinterSpec.scala | 6 +- .../cormorant/generic/AutoSpec.scala | 36 +++- .../cormorant/generic/SemiAutoSpec.scala | 120 +++++++++---- .../cormorant/parser/CSVParserSpecs.scala | 170 ++++++++++++------ .../cormorant/parser/TSVParserSpecs.scala | 170 ++++++++++++------ .../cormorant/refined/RefinedSpec.scala | 12 +- project/Libraries.scala | 13 +- project/plugins.sbt | 16 +- 13 files changed, 451 insertions(+), 197 deletions(-) diff --git a/build.sbt b/build.sbt index 597cae65..3e0dbadb 100644 --- a/build.sbt +++ b/build.sbt @@ -31,8 +31,10 @@ val commonSettings = Seq( Some("releases" at release) }, scalacOptions ++= addScalacOptions, - addCompilerPlugin("org.typelevel" %% "kind-projector" % "0.13.2" cross CrossVersion.full), - addCompilerPlugin("com.olegpy" %% "better-monadic-for" % "0.3.1"), + addCompilerPlugin( + "org.typelevel" %% "kind-projector" % "0.13.2" cross CrossVersion.full + ), + addCompilerPlugin("com.olegpy" %% "better-monadic-for" % "0.3.1"), testFrameworks += new TestFramework("munit.Framework"), libraryDependencies ++= Seq( MUnitTest, diff --git a/modules/core/src/test/scala/io/chrisdavenport/cormorant/CormorantArbitraries.scala b/modules/core/src/test/scala/io/chrisdavenport/cormorant/CormorantArbitraries.scala index 8765bb43..0379726b 100644 --- a/modules/core/src/test/scala/io/chrisdavenport/cormorant/CormorantArbitraries.scala +++ b/modules/core/src/test/scala/io/chrisdavenport/cormorant/CormorantArbitraries.scala @@ -4,21 +4,20 @@ import org.scalacheck._ import _root_.cats.data._ trait CormorantArbitraries { - // Necessary for Round-tripping for fs2. 
As we can't always clarify the empty string following - // semantics. We use printable to remove the subset that doesn't pass through utf8 encoding - implicit val arbField : Arbitrary[CSV.Field] = Arbitrary( - for { - char <- Gen.asciiPrintableChar - string <- Gen.asciiPrintableStr - } yield CSV.Field(char.toString() + string) - ) - + // Necessary for round-tripping through fs2: we can't always preserve empty-string + // semantics, so we restrict to printable characters, removing the subset that doesn't survive UTF-8 encoding + implicit val arbField: Arbitrary[CSV.Field] = Arbitrary( + for { + char <- Gen.asciiPrintableChar + string <- Gen.asciiPrintableStr + } yield CSV.Field(char.toString() + string) + ) // Must be 1 or More def genRow(s: Int): Gen[CSV.Row] = for { - field <- Arbitrary.arbitrary[CSV.Field] - list <- Gen.listOfN(s - 1, Arbitrary.arbitrary[CSV.Field]) - } yield CSV.Row(NonEmptyList(field, list)) + field <- Arbitrary.arbitrary[CSV.Field] + list <- Gen.listOfN(s - 1, Arbitrary.arbitrary[CSV.Field]) + } yield CSV.Row(NonEmptyList(field, list)) implicit val arbRow: Arbitrary[CSV.Row] = Arbitrary( for { @@ -29,11 +28,11 @@ trait CormorantArbitraries { // Must be 1 or more def genRows(s: Int): Gen[CSV.Rows] = for { - row <- genRow(s) - l <- Gen.listOf(genRow(s)) - } yield CSV.Rows(row :: l) + row <- genRow(s) + l <- Gen.listOf(genRow(s)) + } yield CSV.Rows(row :: l) - implicit val arbRows : Arbitrary[CSV.Rows] = Arbitrary( + implicit val arbRows: Arbitrary[CSV.Rows] = Arbitrary( for { choose <- Gen.choose(1, 25) rows <- genRows(choose) @@ -41,27 +40,27 @@ trait CormorantArbitraries { // Same logic as fields - implicit val arbHeader : Arbitrary[CSV.Header] = Arbitrary( + implicit val arbHeader: Arbitrary[CSV.Header] = Arbitrary( for { char <- Gen.asciiPrintableChar - string <- Gen.asciiPrintableStr - } yield CSV.Header(char.toString() + string) + string <- Gen.asciiPrintableStr + } yield CSV.Header(char.toString() + string) ) // Must be 1 or more def 
genHeaders(s: Int): Gen[CSV.Headers] = for { - header <- Arbitrary.arbitrary[CSV.Header] - list <- Gen.listOfN(s- 1, Arbitrary.arbitrary[CSV.Header]) - } yield CSV.Headers(NonEmptyList(header,list)) + header <- Arbitrary.arbitrary[CSV.Header] + list <- Gen.listOfN(s - 1, Arbitrary.arbitrary[CSV.Header]) + } yield CSV.Headers(NonEmptyList(header, list)) - implicit val arbHeaders : Arbitrary[CSV.Headers] = Arbitrary( + implicit val arbHeaders: Arbitrary[CSV.Headers] = Arbitrary( for { choose <- Gen.choose(1, 25) headers <- genHeaders(choose) } yield headers ) - implicit val arbComplete : Arbitrary[CSV.Complete] = Arbitrary( + implicit val arbComplete: Arbitrary[CSV.Complete] = Arbitrary( for { choose <- Gen.choose(1, 25) headers <- genHeaders(choose) @@ -71,4 +70,4 @@ trait CormorantArbitraries { } -object CormorantArbitraries extends CormorantArbitraries \ No newline at end of file +object CormorantArbitraries extends CormorantArbitraries diff --git a/modules/core/src/test/scala/io/chrisdavenport/cormorant/PrinterSpec.scala b/modules/core/src/test/scala/io/chrisdavenport/cormorant/PrinterSpec.scala index c96446b3..c5508f07 100644 --- a/modules/core/src/test/scala/io/chrisdavenport/cormorant/PrinterSpec.scala +++ b/modules/core/src/test/scala/io/chrisdavenport/cormorant/PrinterSpec.scala @@ -7,13 +7,26 @@ class PrinterSpec extends munit.FunSuite { test("Print a simple csv") { val csv = CSV.Complete( CSV.Headers( - NonEmptyList.of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number")) + NonEmptyList + .of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number")) ), CSV.Rows( List( - CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))), - CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))), - CSV.Row(NonEmptyList.of(CSV.Field("Yellow"), CSV.Field("Broccoli"), CSV.Field("3"))) + CSV.Row( + NonEmptyList + .of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1")) + ), + CSV.Row( + NonEmptyList + 
.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2")) + ), + CSV.Row( + NonEmptyList.of( + CSV.Field("Yellow"), + CSV.Field("Broccoli"), + CSV.Field("3") + ) + ) ) ) ) diff --git a/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala b/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala index 53e0a6ee..177d7cef 100644 --- a/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala +++ b/modules/fs2/src/main/scala/io/chrisdavenport/cormorant/fs2/package.scala @@ -223,7 +223,7 @@ package object fs2 { * from a valid `LabelledWrite[A]` implicit instance. * * @example - * {{{ Stream .emits(list) .through(writeLabelled(Printer.default)) }}} + * {{{Stream .emits(list) .through(writeLabelled(Printer.default))}}} */ def writeLabelled[F[_], A: LabelledWrite](p: Printer): Pipe[F, A, String] = s => diff --git a/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingParserSpec.scala b/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingParserSpec.scala index 7f0ca492..13d2a22b 100644 --- a/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingParserSpec.scala +++ b/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingParserSpec.scala @@ -13,15 +13,18 @@ class StreamingParserSpec extends CatsEffectSuite { def ruinDelims(str: String) = augmentString(str).flatMap { case '\n' => "\r\n" - case c => c.toString + case c => c.toString } // https://github.com/ChristopherDavenport/cormorant/pull/84 - test("Streaming Parser parses a known value that did not work with streaming") { + test( + "Streaming Parser parses a known value that did not work with streaming" + ) { val x = """First Name,Last Name,Email Larry,Bordowitz,larry@example.com Anonymous,Hippopotamus,hippo@example.com""" - val source = IO.pure(new ByteArrayInputStream(ruinDelims(x).getBytes): InputStream) + val source = + IO.pure(new ByteArrayInputStream(ruinDelims(x).getBytes): InputStream) _root_.fs2.io 
.readInputStream( source, @@ -33,15 +36,27 @@ Anonymous,Hippopotamus,hippo@example.com""" .toVector .map { v => val header = CSV.Headers( - NonEmptyList.of(CSV.Header("First Name"), CSV.Header("Last Name"), CSV.Header("Email")) + NonEmptyList.of( + CSV.Header("First Name"), + CSV.Header("Last Name"), + CSV.Header("Email") + ) ) val row1 = CSV.Row( NonEmptyList - .of(CSV.Field("Larry"), CSV.Field("Bordowitz"), CSV.Field("larry@example.com")) + .of( + CSV.Field("Larry"), + CSV.Field("Bordowitz"), + CSV.Field("larry@example.com") + ) ) val row2 = CSV.Row( NonEmptyList - .of(CSV.Field("Anonymous"), CSV.Field("Hippopotamus"), CSV.Field("hippo@example.com")) + .of( + CSV.Field("Anonymous"), + CSV.Field("Hippopotamus"), + CSV.Field("hippo@example.com") + ) ) assertEquals( Vector( diff --git a/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingPrinterSpec.scala b/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingPrinterSpec.scala index f09cf4ae..746a6252 100644 --- a/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingPrinterSpec.scala +++ b/modules/fs2/src/test/scala/io/chrisdavenport/cormorant/fs2/StreamingPrinterSpec.scala @@ -72,7 +72,11 @@ class StreamingPrinterSuite implicit val L: LabelledWrite[Foo] = new LabelledWrite[Foo] { override def headers: CSV.Headers = CSV.Headers( - NonEmptyList.of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number")) + NonEmptyList.of( + CSV.Header("Color"), + CSV.Header("Food"), + CSV.Header("Number") + ) ) override def write(a: Foo): CSV.Row = diff --git a/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/AutoSpec.scala b/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/AutoSpec.scala index 29988ac1..c050bec0 100644 --- a/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/AutoSpec.scala +++ b/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/AutoSpec.scala @@ -10,8 +10,10 @@ class AutoSpec extends 
munit.FunSuite {
   test("encode a row with Write automatically") {
     case class Example(i: Int, s: String, b: Int)
-    val encoded = Example(1,"Hello",73).writeRow
-    val expected = CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73")))
+    val encoded = Example(1, "Hello", 73).writeRow
+    val expected = CSV.Row(
+      NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73"))
+    )
     assertEquals(encoded, expected)
   }
@@ -20,15 +22,25 @@ class AutoSpec extends munit.FunSuite {
     val encoded = List(Example(1, Option("Hello"), 73)).writeComplete
     val expected = CSV.Complete(
-      CSV.Headers(NonEmptyList.of(CSV.Header("i"), CSV.Header("s"), CSV.Header("b"))),
-      CSV.Rows(List(CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73")))))
+      CSV.Headers(
+        NonEmptyList.of(CSV.Header("i"), CSV.Header("s"), CSV.Header("b"))
+      ),
+      CSV.Rows(
+        List(
+          CSV.Row(
+            NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73"))
+          )
+        )
+      )
     )
     assertEquals(encoded, expected)
   }

   test("read a row with read automatically") {
     case class Example(i: Int, s: Option[String], b: Int)
-    val from = CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73")))
+    val from = CSV.Row(
+      NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73"))
+    )
     val expected = Example(1, Some("Hello"), 73)
     assertEquals(from.readRow[Example], Right(expected))
   }
@@ -40,11 +52,19 @@ class AutoSpec extends munit.FunSuite {
     // Notice That the order is different than the example above
     val fromCSV = CSV.Complete(
-      CSV.Headers(NonEmptyList.of(CSV.Header("b"), CSV.Header("s"), CSV.Header("i"))),
-      CSV.Rows(List(CSV.Row(NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1")))))
+      CSV.Headers(
+        NonEmptyList.of(CSV.Header("b"), CSV.Header("s"), CSV.Header("i"))
+      ),
+      CSV.Rows(
+        List(
+          CSV.Row(
+            NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1"))
+          )
+        )
+      )
     )
     val expected = List(Example(1, Option("Hello"), 73)).map(Either.right)
     assertEquals(fromCSV.readLabelled[Example], expected)
   }
-}
\ No newline at end of file
+}
diff --git a/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/SemiAutoSpec.scala b/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/SemiAutoSpec.scala
index 093bd64a..23a11bfb 100644
--- a/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/SemiAutoSpec.scala
+++ b/modules/generic/src/test/scala/io/chrisdavenport/cormorant/generic/SemiAutoSpec.scala
@@ -12,9 +12,11 @@ class SemiAutoSpec extends munit.FunSuite {
     case class Example(i: Int, s: String, b: Int)
     implicit val writeExample: Write[Example] = deriveWrite
-    val encoded = Encoding.writeRow(Example(1,"Hello",73))
-    val expected = CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73")))
-
+    val encoded = Encoding.writeRow(Example(1, "Hello", 73))
+    val expected = CSV.Row(
+      NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73"))
+    )
+
     assertEquals(encoded, expected)
   }
@@ -24,8 +26,16 @@ class SemiAutoSpec extends munit.FunSuite {
     val encoded = Encoding.writeComplete(List(Example(1, Option("Hello"), 73)))
     val expected = CSV.Complete(
-      CSV.Headers(NonEmptyList.of(CSV.Header("i"), CSV.Header("s"), CSV.Header("b"))),
-      CSV.Rows(List(CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73")))))
+      CSV.Headers(
+        NonEmptyList.of(CSV.Header("i"), CSV.Header("s"), CSV.Header("b"))
+      ),
+      CSV.Rows(
+        List(
+          CSV.Row(
+            NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73"))
+          )
+        )
+      )
     )
     assertEquals(encoded, expected)
   }
@@ -33,36 +43,49 @@ class SemiAutoSpec extends munit.FunSuite {
   test("read a correctly encoded row") {
     case class Example(i: Int, s: Option[String], b: Int)
     implicit val derivedRead: Read[Example] = deriveRead
-    val from = CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73")))
+    val from = CSV.Row(
+      NonEmptyList.of(CSV.Field("1"), CSV.Field("Hello"), CSV.Field("73"))
+    )
     val expected = Example(1, Some("Hello"), 73)
-    assertEquals(Read[Example].read(from), Right(expected) )
+    assertEquals(Read[Example].read(from), Right(expected))
   }

   test("read a labelledRead row by name") {
     import cats.syntax.either._
     case class Example(i: Int, s: Option[String], b: Int)
-    implicit val labelledReadExampled : LabelledRead[Example] = deriveLabelledRead
+    implicit val labelledReadExampled: LabelledRead[Example] =
+      deriveLabelledRead

     // Notice That the order is different than the example above
     val fromCSV = CSV.Complete(
-      CSV.Headers(NonEmptyList.of(CSV.Header("b"), CSV.Header("s"), CSV.Header("i"))),
-      CSV.Rows(List(CSV.Row(NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1")))))
+      CSV.Headers(
+        NonEmptyList.of(CSV.Header("b"), CSV.Header("s"), CSV.Header("i"))
+      ),
+      CSV.Rows(
+        List(
+          CSV.Row(
+            NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1"))
+          )
+        )
+      )
     )
     val expected = List(Example(1, Option("Hello"), 73)).map(Either.right)
-
+
     assertEquals(Decoding.readLabelled[Example](fromCSV), expected)
   }

   test("read a product field row") {
     case class Foo(i: Int)
     case class Example(i: Foo, s: Option[String], b: Int)
-    implicit val f : Read[Foo] = deriveRead
+    implicit val f: Read[Foo] = deriveRead
     val _ = f
-    implicit val r : Read[Example] = deriveRead
+    implicit val r: Read[Example] = deriveRead

-    val fromCSV =
-      CSV.Row(NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1")))
+    val fromCSV =
+      CSV.Row(
+        NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1"))
+      )

     assertEquals(r.read(fromCSV), Right(Example(Foo(73), Some("Hello"), 1)))
   }
@@ -70,43 +93,78 @@ class SemiAutoSpec extends munit.FunSuite {
   test("write a product field row") {
     case class Foo(i: Int, x: String)
     case class Example(i: Foo, s: Option[String], b: Int)
-    implicit val f : Write[Foo] = deriveWrite
+    implicit val f: Write[Foo] = deriveWrite
     val _ = f
-    implicit val r : Write[Example] = deriveWrite
+    implicit val r: Write[Example] = deriveWrite

     val input = Example(Foo(73, "yellow"), Some("foo"), 5)
-    assertEquals(r.write(input), CSV.Row(
-      NonEmptyList.of(CSV.Field("73"), CSV.Field("yellow"), CSV.Field("foo"), CSV.Field("5"))
-    ))
+    assertEquals(
+      r.write(input),
+      CSV.Row(
+        NonEmptyList.of(
+          CSV.Field("73"),
+          CSV.Field("yellow"),
+          CSV.Field("foo"),
+          CSV.Field("5")
+        )
+      )
+    )
   }

   test("read a labelled product field row") {
     case class Foo(i: Int)
     case class Example(i: Foo, s: Option[String], b: Int)
-    implicit val f : LabelledRead[Foo] = deriveLabelledRead
+    implicit val f: LabelledRead[Foo] = deriveLabelledRead
     val _ = f
-    implicit val r : LabelledRead[Example] = deriveLabelledRead
+    implicit val r: LabelledRead[Example] = deriveLabelledRead

     val fromCSV = CSV.Complete(
-      CSV.Headers(NonEmptyList.of(CSV.Header("b"), CSV.Header("s"), CSV.Header("i"))),
-      CSV.Rows(List(CSV.Row(NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1")))))
+      CSV.Headers(
+        NonEmptyList.of(CSV.Header("b"), CSV.Header("s"), CSV.Header("i"))
+      ),
+      CSV.Rows(
+        List(
+          CSV.Row(
+            NonEmptyList.of(CSV.Field("73"), CSV.Field("Hello"), CSV.Field("1"))
+          )
+        )
+      )
     )
     val expected = List(Example(Foo(1), Option("Hello"), 73))
       .map(Either.right)
-
+
     assertEquals(Decoding.readLabelled[Example](fromCSV), expected)
   }

   test("write a labelled product field row") {
     case class Foo(i: Int, m: String)
     case class Example(i: Foo, s: Option[String], b: Int)
-    implicit val f : LabelledWrite[Foo] = deriveLabelledWrite
+    implicit val f: LabelledWrite[Foo] = deriveLabelledWrite
     val _ = f
-    implicit val w : LabelledWrite[Example] = deriveLabelledWrite
-    val encoded = Encoding.writeComplete(List(Example(Foo(1, "bar"), Option("Hello"), 73)))
+    implicit val w: LabelledWrite[Example] = deriveLabelledWrite
+    val encoded =
+      Encoding.writeComplete(List(Example(Foo(1, "bar"), Option("Hello"), 73)))
     val expected = CSV.Complete(
-      CSV.Headers(NonEmptyList.of(CSV.Header("i"), CSV.Header("m"), CSV.Header("s"), CSV.Header("b"))),
-      CSV.Rows(List(CSV.Row(NonEmptyList.of(CSV.Field("1"), CSV.Field("bar"), CSV.Field("Hello"), CSV.Field("73")))))
+      CSV.Headers(
+        NonEmptyList.of(
+          CSV.Header("i"),
+          CSV.Header("m"),
+          CSV.Header("s"),
+          CSV.Header("b")
+        )
+      ),
+      CSV.Rows(
+        List(
+          CSV.Row(
+            NonEmptyList.of(
+              CSV.Field("1"),
+              CSV.Field("bar"),
+              CSV.Field("Hello"),
+              CSV.Field("73")
+            )
+          )
+        )
+      )
     )
     assertEquals(encoded, expected)
   }
-}
\ No newline at end of file
+}
diff --git a/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/CSVParserSpecs.scala b/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/CSVParserSpecs.scala
index 208dabe5..e43be557 100644
--- a/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/CSVParserSpecs.scala
+++ b/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/CSVParserSpecs.scala
@@ -11,14 +11,20 @@ class CSVParserSpec extends munit.FunSuite {
   test("parse a single header") {
     val basicString = "Something,"
     val expect = CSV.Header("Something")
-    assertEquals(CSVParser.name.parse(basicString).done, ParseResult.Done(",", expect))
+    assertEquals(
+      CSVParser.name.parse(basicString).done,
+      ParseResult.Done(",", expect)
+    )
   }

   test("parse first header in a header list") {
     val baseHeader = "Something,Something2,Something3"
     val expect = CSV.Header("Something")
-    assertEquals(CSVParser.name.parse(baseHeader), ParseResult.Done(",Something2,Something3", expect))
+    assertEquals(
+      CSVParser.name.parse(baseHeader),
+      ParseResult.Done(",Something2,Something3", expect)
+    )
   }

   test("parse a group of headers") {
@@ -28,7 +34,10 @@ class CSVParserSpec extends munit.FunSuite {
       CSV.Header("Something2"),
       CSV.Header("Something3")
     )
-    val result = (CSVParser.name, many(CSVParser.SEPARATOR ~> CSVParser.name)).mapN(_ :: _).parse(baseHeader).done
+    val result = (CSVParser.name, many(CSVParser.SEPARATOR ~> CSVParser.name))
+      .mapN(_ :: _)
+      .parse(baseHeader)
+      .done

     assertEquals(result, ParseResult.Done("", expect))
   }
@@ -62,25 +71,49 @@ class CSVParserSpec extends munit.FunSuite {
   test("parse rows correctly") {
     val csv = CSV.Rows(
       List(
-        CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))),
-        CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2")))
+        CSV.Row(
+          NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))
+        ),
+        CSV.Row(
+          NonEmptyList.of(
+            CSV.Field("Red"),
+            CSV.Field("Margarine"),
+            CSV.Field("2")
+          )
+        )
       )
     )
     val csvParse = """Blue,Pizza,1
                      |Red,Margarine,2""".stripMargin
-    assertEquals(CSVParser.fileBody.parse(csvParse).done.either, Either.right(csv))
+    assertEquals(
+      CSVParser.fileBody.parse(csvParse).done.either,
+      Either.right(csv)
+    )
   }

   test("complete a csv parse") {
     val csv = CSV.Complete(
       CSV.Headers(
-        NonEmptyList.of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
+        NonEmptyList
+          .of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
       ),
       CSV.Rows(
         List(
-          CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Yellow"), CSV.Field("Broccoli"), CSV.Field("3")))
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))
+          ),
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))
+          ),
+          CSV.Row(
+            NonEmptyList.of(
+              CSV.Field("Yellow"),
+              CSV.Field("Broccoli"),
+              CSV.Field("3")
+            )
+          )
         )
       )
     )
@@ -89,22 +122,38 @@ class CSVParserSpec extends munit.FunSuite {
       |Red,Margarine,2
      |Yellow,Broccoli,3""".stripMargin

-    assertEquals(CSVParser.`complete-file`
-      .parse(expectedCSVString)
-      .done
-      .either, Either.right(csv))
+    assertEquals(
+      CSVParser.`complete-file`
+        .parse(expectedCSVString)
+        .done
+        .either,
+      Either.right(csv)
+    )
   }

   test("parse a complete csv with a trailing new line by stripping it") {
     val csv = CSV.Complete(
       CSV.Headers(
-        NonEmptyList.of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
+        NonEmptyList
+          .of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
       ),
       CSV.Rows(
         List(
-          CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Yellow"), CSV.Field("Broccoli"), CSV.Field("3")))
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))
+          ),
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))
+          ),
+          CSV.Row(
+            NonEmptyList.of(
+              CSV.Field("Yellow"),
+              CSV.Field("Broccoli"),
+              CSV.Field("3")
+            )
+          )
         )
       )
     )
@@ -114,53 +163,74 @@ class CSVParserSpec extends munit.FunSuite {
       |Yellow,Broccoli,3
      |""".stripMargin

-    assertEquals(CSVParser.`complete-file`
-      .parse(expectedCSVString)
-      .done
-      .either
-      .map(_.stripTrailingRow), Either.right(csv))
+    assertEquals(
+      CSVParser.`complete-file`
+        .parse(expectedCSVString)
+        .done
+        .either
+        .map(_.stripTrailingRow),
+      Either.right(csv)
+    )
   }

   test("parse an escaped row with a comma") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow,Dog"),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow,Dog"),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green,\"Yellow,Dog\",Blue"
-    assertEquals(CSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      CSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }

   test("parse an escaped row with a double quote escaped") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow, \"Dog\""),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow, \"Dog\""),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green,\"Yellow, \"\"Dog\"\"\",Blue"
-    assertEquals(CSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      CSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }
-
-
   test("parse an escaped row with embedded newline") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow\n Dog"),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow\n Dog"),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green,\"Yellow\n Dog\",Blue"
-    assertEquals(CSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      CSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }

   test("parse an escaped row with embedded CRLF") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow\r\n Dog"),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow\r\n Dog"),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green,\"Yellow\r\n Dog\",Blue"
-    assertEquals(CSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      CSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }
-}
\ No newline at end of file
+}
diff --git a/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/TSVParserSpecs.scala b/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/TSVParserSpecs.scala
index f3129c87..29d774e2 100644
--- a/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/TSVParserSpecs.scala
+++ b/modules/parser/src/test/scala/io/chrisdavenport/cormorant/parser/TSVParserSpecs.scala
@@ -11,14 +11,20 @@ class TSVParserSpec extends munit.FunSuite {
   test("parse a single header") {
     val basicString = "Something\t"
     val expect = CSV.Header("Something")
-    assertEquals(TSVParser.name.parse(basicString).done, ParseResult.Done("\t", expect))
+    assertEquals(
+      TSVParser.name.parse(basicString).done,
+      ParseResult.Done("\t", expect)
+    )
   }

   test("parse first header in a header list") {
     val baseHeader = "Something\tSomething2\tSomething3"
     val expect = CSV.Header("Something")
-    assertEquals(TSVParser.name.parse(baseHeader), ParseResult.Done("\tSomething2\tSomething3", expect))
+    assertEquals(
+      TSVParser.name.parse(baseHeader),
+      ParseResult.Done("\tSomething2\tSomething3", expect)
+    )
   }

   test("parse a group of headers") {
@@ -28,7 +34,10 @@ class TSVParserSpec extends munit.FunSuite {
       CSV.Header("Something2"),
       CSV.Header("Something3")
     )
-    val result = (TSVParser.name, many(TSVParser.SEPARATOR ~> TSVParser.name)).mapN(_ :: _).parse(baseHeader).done
+    val result = (TSVParser.name, many(TSVParser.SEPARATOR ~> TSVParser.name))
+      .mapN(_ :: _)
+      .parse(baseHeader)
+      .done

     assertEquals(result, ParseResult.Done("", expect))
   }
@@ -62,99 +71,160 @@ class TSVParserSpec extends munit.FunSuite {
   test("parse rows correctly") {
     val csv = CSV.Rows(
       List(
-        CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))),
-        CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2")))
+        CSV.Row(
+          NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))
+        ),
+        CSV.Row(
+          NonEmptyList.of(
+            CSV.Field("Red"),
+            CSV.Field("Margarine"),
+            CSV.Field("2")
+          )
+        )
       )
     )
     val csvParse = "Blue\tPizza\t1\nRed\tMargarine\t2"
-    assertEquals(TSVParser.fileBody.parse(csvParse).done.either, Either.right(csv))
+    assertEquals(
+      TSVParser.fileBody.parse(csvParse).done.either,
+      Either.right(csv)
+    )
   }

   test("complete a csv parse") {
     val csv = CSV.Complete(
       CSV.Headers(
-        NonEmptyList.of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
+        NonEmptyList
+          .of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
       ),
       CSV.Rows(
         List(
-          CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Yellow"), CSV.Field("Broccoli"), CSV.Field("3")))
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))
+          ),
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))
+          ),
+          CSV.Row(
+            NonEmptyList.of(
+              CSV.Field("Yellow"),
+              CSV.Field("Broccoli"),
+              CSV.Field("3")
+            )
+          )
        )
      )
    )
     val expectedCSVString = "Color\tFood\tNumber\nBlue\tPizza\t1\nRed\tMargarine\t2\nYellow\tBroccoli\t3"

-    assertEquals(TSVParser.`complete-file`
-      .parse(expectedCSVString)
-      .done
-      .either, Either.right(csv))
+    assertEquals(
+      TSVParser.`complete-file`
+        .parse(expectedCSVString)
+        .done
+        .either,
+      Either.right(csv)
+    )
   }

   test("parse a complete csv with a trailing new line by stripping it") {
     val csv = CSV.Complete(
       CSV.Headers(
-        NonEmptyList.of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
+        NonEmptyList
+          .of(CSV.Header("Color"), CSV.Header("Food"), CSV.Header("Number"))
       ),
       CSV.Rows(
         List(
-          CSV.Row(NonEmptyList.of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))),
-          CSV.Row(NonEmptyList.of(CSV.Field("Yellow"), CSV.Field("Broccoli"), CSV.Field("3")))
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Blue"), CSV.Field("Pizza"), CSV.Field("1"))
+          ),
+          CSV.Row(
+            NonEmptyList
+              .of(CSV.Field("Red"), CSV.Field("Margarine"), CSV.Field("2"))
+          ),
+          CSV.Row(
+            NonEmptyList.of(
+              CSV.Field("Yellow"),
+              CSV.Field("Broccoli"),
+              CSV.Field("3")
+            )
+          )
        )
      )
    )
     val expectedCSVString = "Color\tFood\tNumber\nBlue\tPizza\t1\nRed\tMargarine\t2\nYellow\tBroccoli\t3\n"

-    assertEquals(TSVParser.`complete-file`
-      .parse(expectedCSVString)
-      .done
-      .either
-      .map(_.stripTrailingRow) , Either.right(csv))
+    assertEquals(
+      TSVParser.`complete-file`
+        .parse(expectedCSVString)
+        .done
+        .either
+        .map(_.stripTrailingRow),
+      Either.right(csv)
+    )
   }

   test("parse an escaped row with a tab") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow\tDog"),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow\tDog"),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green\t\"Yellow\tDog\"\tBlue"
-    assertEquals(TSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      TSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }

   test("parse an escaped row with a double quote escaped") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow\t \"Dog\""),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow\t \"Dog\""),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green\t\"Yellow\t \"\"Dog\"\"\"\tBlue"
-    assertEquals(TSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      TSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
  }
-
-
  test("parse an escaped row with embedded newline") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow\n Dog"),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow\n Dog"),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green\t\"Yellow\n Dog\"\tBlue"
-    assertEquals(TSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      TSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }

   test("parse an escaped row with embedded CRLF") {
-    val csv = CSV.Row(NonEmptyList.of(
-      CSV.Field("Green"),
-      CSV.Field("Yellow\r\n Dog"),
-      CSV.Field("Blue")
-    ))
+    val csv = CSV.Row(
+      NonEmptyList.of(
+        CSV.Field("Green"),
+        CSV.Field("Yellow\r\n Dog"),
+        CSV.Field("Blue")
+      )
+    )
     val parseString = "Green\t\"Yellow\r\n Dog\"\tBlue"
-    assertEquals(TSVParser.record.parse(parseString).done.either, Either.right(csv))
+    assertEquals(
+      TSVParser.record.parse(parseString).done.either,
+      Either.right(csv)
+    )
   }
-}
\ No newline at end of file
+}
diff --git a/modules/refined/src/test/scala/io/chrisdavenport/cormorant/refined/RefinedSpec.scala b/modules/refined/src/test/scala/io/chrisdavenport/cormorant/refined/RefinedSpec.scala
index e9509c0d..5732520d 100644
--- a/modules/refined/src/test/scala/io/chrisdavenport/cormorant/refined/RefinedSpec.scala
+++ b/modules/refined/src/test/scala/io/chrisdavenport/cormorant/refined/RefinedSpec.scala
@@ -1,6 +1,5 @@
 package io.chrisdavenport.cormorant.refined

-
 class RefinedSpec extends munit.FunSuite {
   test("be able to derive a put for a class") {
     import _root_.io.chrisdavenport.cormorant._
@@ -19,9 +18,12 @@ class RefinedSpec extends munit.FunSuite {
     // import eu.timepit.refined.string._
     // import shapeless.{ ::, HNil }

-    val refinedValue : String Refined NonEmpty = refineMV[NonEmpty]("Hello")
+    val refinedValue: String Refined NonEmpty = refineMV[NonEmpty]("Hello")

-    assertEquals(Put[String Refined NonEmpty].put(refinedValue), CSV.Field("Hello"))
+    assertEquals(
+      Put[String Refined NonEmpty].put(refinedValue),
+      CSV.Field("Hello")
+    )
   }

@@ -33,10 +35,10 @@ class RefinedSpec extends munit.FunSuite {
     import eu.timepit.refined.api.Refined
     import eu.timepit.refined.collection.NonEmpty

-    val refinedValue : String Refined NonEmpty = refineMV[NonEmpty]("Hello")
+    val refinedValue: String Refined NonEmpty = refineMV[NonEmpty]("Hello")
     val csv = CSV.Field("Hello")

     assertEquals(Get[String Refined NonEmpty].get(csv), Right(refinedValue))
   }
-}
\ No newline at end of file
+}
diff --git a/project/Libraries.scala b/project/Libraries.scala
index d28747e9..a39bdbba 100644
--- a/project/Libraries.scala
+++ b/project/Libraries.scala
@@ -16,17 +16,18 @@ object Libraries {

   val AttoCore = "org.tpolecat" %% "atto-core" % AttoCoreVersion

-  val fs2Core = "co.fs2" %% "fs2-core" % fs2Version
-  val fs2IOTest = "co.fs2" %% "fs2-io" % fs2Version % Test
+  val fs2Core = "co.fs2" %% "fs2-core" % fs2Version
+  val fs2IOTest = "co.fs2" %% "fs2-io" % fs2Version % Test

   val Refined = "eu.timepit" %% "refined" % RefinedVersion
   val Shapeless = "com.chuusai" %% "shapeless" % ShapelessVersion

-  val CatsCore = "org.typelevel" %% "cats-core" % CatsVersion
-  val CatsKernel = "org.typelevel" %% "cats-kernel" % CatsVersion
-  val MUnitTest = "org.scalameta" %% "munit" % munitV % Test
-  val MUnitCatsEffectTest = "org.typelevel" %% "munit-cats-effect-3" % munitCatsEffectV % Test
+  val CatsCore = "org.typelevel" %% "cats-core" % CatsVersion
+  val CatsKernel = "org.typelevel" %% "cats-kernel" % CatsVersion
+  val MUnitTest = "org.scalameta" %% "munit" % munitV % Test
+  val MUnitCatsEffectTest =
+    "org.typelevel" %% "munit-cats-effect-3" % munitCatsEffectV % Test

   val ScalaCheckEffectMUnit =
     "org.typelevel" %% "scalacheck-effect-munit" % scalacheckEffectV % Test
diff --git a/project/plugins.sbt b/project/plugins.sbt
index 00d4bdb8..7ae90723 100644
--- a/project/plugins.sbt
+++ b/project/plugins.sbt
@@ -1,8 +1,8 @@
-addSbtPlugin("io.github.davidgregory084" % "sbt-tpolecat" % "0.3.1")
-addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.4.6")
-addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.10.0")
-addSbtPlugin("com.47deg" % "sbt-microsites" % "1.3.2")
-addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "1.1.0")
-addSbtPlugin("org.portable-scala" % "sbt-scalajs-crossproject" % "1.2.0")
-addSbtPlugin("org.scalameta" % "sbt-mdoc" % "2.3.2")
-addSbtPlugin("com.github.cb372" % "sbt-explicit-dependencies" % "0.3.1")
+addSbtPlugin("io.github.davidgregory084" % "sbt-tpolecat" % "0.3.1")
+addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.4.6")
+addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.10.0")
+addSbtPlugin("com.47deg" % "sbt-microsites" % "1.3.2")
+addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "1.1.0")
+addSbtPlugin("org.portable-scala" % "sbt-scalajs-crossproject" % "1.2.0")
+addSbtPlugin("org.scalameta" % "sbt-mdoc" % "2.3.2")
+addSbtPlugin("com.github.cb372" % "sbt-explicit-dependencies" % "0.3.1")

From 4fd20942c32f990d7625802c4ad4720beac811f5 Mon Sep 17 00:00:00 2001
From: rbbowd
Date: Thu, 18 Apr 2024 09:44:44 -0400
Subject: [PATCH 10/15] Update spelling

---
 .github/actions/spelling/patterns.txt | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/.github/actions/spelling/patterns.txt b/.github/actions/spelling/patterns.txt
index 62d765b8..1b9a6ef7 100644
--- a/.github/actions/spelling/patterns.txt
+++ b/.github/actions/spelling/patterns.txt
@@ -56,3 +56,9 @@ mailto:[-a-zA-Z=;:/?%&0-9+@.]{3,}

 # ignore long runs of a single character:
 \b([A-Za-z])\g{-1}{3,}\b
+
+# hit-count: 4 file-count: 1
+# Compiler flags (Unix, Java/Scala)
+# Use if you have things like `-Pdocker` and want to treat them as `docker`
+(?:^|[\t ,>"'`=(])-(?:(?:J-|)[DPWXY]|[Llf])(?=[A-Z]{2,}|[A-Z][a-z]|[a-z]{2,})
+

From 7203936a8a1da30e0d148ce2e0d5be4589ac17e1 Mon Sep 17 00:00:00 2001
From: rbbowd
Date: Thu, 18 Apr 2024 15:45:57 -0400
Subject: [PATCH 11/15] fix comment in no-ci

---
 .github/workflows/no-ci.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/no-ci.yml b/.github/workflows/no-ci.yml
index a624036c..ef30b381 100644
--- a/.github/workflows/no-ci.yml
+++ b/.github/workflows/no-ci.yml
@@ -32,5 +32,5 @@ jobs:
     steps:
       - name: Stub gormorant Build
        run: |
-          echo "Files outside of the lighthouse-queues workflow have changed. This workflow has an equivalently named required check for those files,
+          echo "Files outside of the gormorant workflow have changed. This workflow has an equivalently named required check for those files,
           so this one exists to pass that check in the case that none of those files were changed."

From 07c29cb47a60ddf4ff973aa5c4faaae3a8be4ffd Mon Sep 17 00:00:00 2001
From: rbbowd
Date: Thu, 18 Apr 2024 15:57:19 -0400
Subject: [PATCH 12/15] fix spacing in no-ci

---
 .github/workflows/no-ci.yml | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/.github/workflows/no-ci.yml b/.github/workflows/no-ci.yml
index ef30b381..381a98ea 100644
--- a/.github/workflows/no-ci.yml
+++ b/.github/workflows/no-ci.yml
@@ -1,16 +1,16 @@
 name: CI
 # If you update paths, make sure to update them in ci.yml as well
 on:
-    push:
-        branches:
-            - master
-        paths:
-            - .github/actions
-            - "README.md"
-    pull_request:
-        paths:
-            - .github/actions
-            - "README.md"
+  push:
+    branches:
+      - master
+    paths:
+      - .github/actions
+      - "README.md"
+  pull_request:
+    paths:
+      - .github/actions
+      - "README.md"

 permissions:
   contents: read

From d4d6d0537cf83b5ca6f5db98db59e443fff40622 Mon Sep 17 00:00:00 2001
From: rbbowd
Date: Thu, 18 Apr 2024 16:53:36 -0400
Subject: [PATCH 13/15] fix spelling again

---
 .github/actions/spelling/expect.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.github/actions/spelling/expect.txt b/.github/actions/spelling/expect.txt
index 46311f7e..27beb298 100644
--- a/.github/actions/spelling/expect.txt
+++ b/.github/actions/spelling/expect.txt
@@ -26,6 +26,7 @@ jenkins
 labelledread
 labelledwrite
 mergify
+Metaspace
 microsite
 munit
 nel

From d1c423bb4b39f67069bbb9f3fc0dbbf9e0f14e74 Mon Sep 17 00:00:00 2001
From: rbbowd
Date: Thu, 18 Apr 2024 16:59:29 -0400
Subject: [PATCH 14/15] bump version

---
 version.sbt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/version.sbt b/version.sbt
index 2de72619..d1cb4481 100644
--- a/version.sbt
+++ b/version.sbt
@@ -1 +1 @@
-ThisBuild / version := "1.0.0"
+ThisBuild / version := "1.0.1"

From 404ebd0426a1f64b712637cb40c5112bbfa5ffa5 Mon Sep 17 00:00:00 2001
From: Josh Soref <2119212+jsoref@users.noreply.github.com>
Date: Thu, 18 Apr 2024 16:56:48 -0400
Subject: [PATCH 15/15] Remove microsite link (#2)

---
 .github/actions/spelling/expect.txt | 4 ----
 README.md                           | 2 --
 2 files changed, 6 deletions(-)

diff --git a/.github/actions/spelling/expect.txt b/.github/actions/spelling/expect.txt
index 27beb298..04cdb77e 100644
--- a/.github/actions/spelling/expect.txt
+++ b/.github/actions/spelling/expect.txt
@@ -10,12 +10,10 @@ chuusai
 comple
 contramap
 davenverse
-Defn
 Delims
 dquote
 Folat
 garnercorp
-GCLOUD
 gcr
 google
 gormorant
@@ -25,9 +23,7 @@ hnil
 jenkins
 labelledread
 labelledwrite
-mergify
 Metaspace
-microsite
 munit
 nel
 readlabelled
diff --git a/README.md b/README.md
index 7d368f77..ad333047 100644
--- a/README.md
+++ b/README.md
@@ -4,5 +4,3 @@ This is a fork of [davenverse/cormorant](https://github.com/davenverse/cormorant

 - removed direct dependency on Cats Effect
 - updated fs2 to 3.9.2
-
-Head on over [to the microsite](https://garnercorp.github.io/gormorant)