With the rapidly evolving requirements, Cassandra releases, and competition, it was only natural we kept Phantom up to scratch. In line with a lot of user feedback, the priorities of 2.0.0 were:
- Go back to the flexible licensing model everyone knows and loves (especially your legal department). No one wants to go through corporate litigation and licensing compliance for a `build.sbt` dependency, and if you've ever worked in a bank we all know it's not happening.
- Phantom was a really fun, time-saving introduction years ago when it was first released, but since then Scala has evolved to a point where many features of more esoteric components, such as the macro API, have reached a degree of stability that we can now exploit to our great advantage: boilerplate elimination.
- From type parameters to keys, table class cake patterns, having to define `fromRow`, and a whole lot of other boilerplate items, we have eliminated them one by one, reducing the amount of code you need to type to make it all work. The future looks even brighter, as we plan on fully eliminating the mapping DSL very shortly in favour of even more lightweight techniques.
Feedback and contributions are welcome, and we are happy to prioritise any crucial features Phantom may currently be lacking.
- Revert all Outworkers projects and all their dependencies to the Apache V2 License.
- Publish `outworkers-util` and all sub-modules to Maven Central.
- Publish `outworkers-diesel` and all sub-modules to Maven Central.
- Drop all dependencies outside of `shapeless` and `datastax-java-driver` from `phantom-dsl`.
- Remove all non-standard resolvers from Phantom; all dependencies should build from JCenter and Maven Central by default, with no custom resolvers required.
- Change all package names and resolvers to reflect our business name change from `Websudos` to `Outworkers` (see the `build.sbt` sketch after this list).
- Create a `1.30.x` release that allows users to transition to a no-custom-resolver version of Phantom 1.0.x even before 2.0.0 is stable.
- Replace the Scala reflection library with a macro that can figure out what the contents of a table are.
- Generate the name of a table using macros.
- Generate the primary key of a table using macros.
- Enforce primary key restrictions on a table using a macro.
- Generate the `fromRow` method of `CassandraTable` using a macro if the `case class` fields and table columns are matched.
- Enforce a same-ordering restriction for case class fields and table columns to avoid generating invalid methods with the macro.
- Generate the `fromRow` method if the fields match and are in arbitrary order, provided there are no duplicate types.
- Allow arbitrary inheritance and usage patterns for Cassandra tables, and resolve inheritance resolutions with macros to correctly identify desired table structures.
- Re-implement primitive types using native macro-derived marshallers/unmarshallers.
- Re-implement prepared statement binds to use macro-derived serializers.
- Add debug strings to `BatchQuery`.
- Use `AnyVal` in the `ImplicitMechanism` where possible.
- Enforce `store` method typechecking at compile time.
- Use `shapeless.HList` as the core primitive inside table store methods.
- Add advanced debugging to the macro API.
- Correctly implement Cassandra pagination using iterators. Currently, setting a `fetchSize` on a query does not correctly propagate or consume the resulting iterator, which leads to API inconsistencies and `PagingState` not being set on any `ResultSet`.
- Add a build matrix that will test phantom against multiple versions of Cassandra in Travis for Scala 2.11, with support for all major releases of Cassandra.
- Bump code coverage up to 100%.
- Native support for multi-tenanted environments via cached sessions.
- Case-sensitive CQL.
- Materialized views (Phantom Pro).
- SASI index support.
- Support for `PER PARTITION LIMIT` in `SelectQuery`.
- Support for `GROUP BY` in `SelectQuery`.
- Implement a compact table DSL that does not require passing in `this` to columns.
- Add support for Scala 2.12 in the `util` library, removing all dependencies that don't comply.
- Add support for Scala 2.12 in the `diesel-engine`.
- Add support for Scala 2.12 in `phantom-dsl`.
- Add support for Scala 2.12 in `phantom-connectors`.
- Add support for Scala 2.12 in `phantom-example`.
- Add support for Scala 2.12 in `phantom-streams`.
- Add support for Scala 2.12 in `phantom-thrift`.
- Add support for Scala 2.12 in `phantom-finagle`.
- Migration guide for transitioning to Phantom 2.0.0. Guide here.
- Move documentation back to the docs folder.
- Add a documentation website on the main page.
- Create a navigator that allows viewing the documentation at a particular point in time.
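As a concrete illustration of the distribution changes above, here is a minimal `build.sbt` sketch, assuming you only want the core DSL; the version number is a placeholder, and the Bintray resolver line is only needed if you want pre-release cuts ahead of Maven Central.

```scala
// Minimal sketch: consuming phantom under the new organisation.
// The version below is a placeholder; substitute the latest 2.x release.
val phantomVersion = "2.0.0"

libraryDependencies ++= Seq(
  // Note the new organisation id: com.outworkers, not com.websudos.
  "com.outworkers" %% "phantom-dsl" % phantomVersion
)

// Optional: early access to cuts published ahead of Maven Central.
resolvers += Resolver.bintrayRepo("outworkers", "oss-releases")
```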
As a word of introduction, this guide is brand new and there may be certain elements we have currently left out. Phantom has an immense adopter base, which includes many of you using the library in ways we do not know of. 2.0.0 completely replaces fundamental aspects of the framework to provide superior performance and reliability, and we have tested back and forth to ensure the smoothest possible transition, but please feel free to report any issues via GitHub and we will fix them straight away.
- The OSS version of phantom has, as of 2.0.0, returned to the Apache V2 license, and the license is here to stay.
- All packages and dependencies are now available under the `com.outworkers` organisation instead of `com.websudos`. As part of long-term re-branding efforts, we have finally felt it's time to make sure the change is consistent throughout.
- There is a new and now completely optional Bintray resolver, `Resolver.bintrayRepo("outworkers", "oss-releases")`, that gives you free access to the latest cuts of our open source releases before they hit Maven Central. We assume no liability for your usage of latest cuts, but we welcome feedback and we do our best to have elaborate CI processes in place.
- Manually defining a `fromRow` inside a `CassandraTable` is no longer required if your column types match your case class types.
- `EnumColumn` now relies entirely on `Primitive.macroImpl`, which means you will not need to pass in the enumeration as an argument to `EnumColumn` anymore. This means `object enum extends EnumColumn(this, enum: MyEnum)` is now simply `object enum extends EnumColumn[MyEnum#Value](this)`.
- All dependencies are now being published to Maven Central. This includes outworkers util and outworkers diesel, projects which have in their own right been completely open sourced under Apache V2 and made public on GitHub.
- All dependencies on `scala-reflect` have been completely removed.
- A new, macro-based mechanism now performs the same auto-discovery task that reflection used to, thanks to `macro-compat`.
- Index modifiers no longer require a type parameter: `PartitionKey`, `PrimaryKey`, `ClusteringOrder` and `Index` don't require the column type to be passed anymore.
- `KeySpaceDef` has been renamed to the more appropriate `CassandraConnection`.
- `CassandraConnection` now natively supports specifying a typed keyspace creation query.
- `TimeWindowCompactionStrategy` is now natively supported in the CREATE/ALTER DSL.
- Collections can now be used as part of a primary or partition key.
- Tuples are now natively supported as valid types via `TupleColumn`.
- `phantom-reactivestreams` is now simply called `phantom-streams`.
- `Database.autocreate` and `Database.autotruncate` are no longer accessible. Use `create`, `createAsync`, `truncate` and `truncateAsync` instead.
- `Database` now requires an f-bounded type argument: `class MyDb(override val connector: CassandraConnection) extends Database[MyDb](connector)`.
- Automated Cassandra pagination via paging states has been moved to a new method called `paginateRecord`. Using `fetchRecord` with a `PagingState` is no longer possible. This is done to distinguish the underlying consumer mechanism of parsing and fetching records from Cassandra.
- `com.outworkers.phantom.dsl.context` should be used instead of `scala.concurrent.ExecutionContext.Implicits.global`. This now has the type `ExecutionContextExecutor`, which allows us to use the same context for both Scala and Java futures (which are used internally as part of the Datastax Java driver). A sketch of the new connector, table and database definitions follows this list.
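To make these items concrete, here is a minimal sketch of 2.x-style definitions, assuming the patterns described above; the enumeration, table, keyspace and database names are all illustrative, not part of the library.

```scala
import com.outworkers.phantom.dsl._

// Illustrative enumeration; EnumColumn now derives its primitive
// via Primitive.macroImpl, so the enum is no longer a value argument.
object Kind extends Enumeration {
  val Meetup, Conference = Value
}

case class Event(id: UUID, kind: Kind.Value)

abstract class Events extends Table[Events, Event] {
  // Index modifiers no longer take the column type as a type parameter.
  object id extends UUIDColumn with PartitionKey
  // EnumColumn only needs the value type and the table reference.
  object kind extends EnumColumn[Kind.Value](this)
}

// KeySpaceDef is gone; connectors are now CassandraConnection values.
// The contact point and keyspace name here are placeholders.
object DefaultConnector {
  val connection: CassandraConnection = ContactPoint.local.keySpace("my_keyspace")
}

// Database requires the f-bounded type argument described above.
class MyDb(override val connector: CassandraConnection) extends Database[MyDb](connector) {
  object events extends Events with Connector
}

object db extends MyDb(DefaultConnector.connection)
```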
Instead of `com.websudos.phantom.Implicits._`, you need to import `com.outworkers.phantom.dsl._`, as shown below. It's also worth noting that if you're using any phantom version after 2.14.0, you are required to have this import in a lot more places than before. This is because the return type of the query methods is now tied to a specific package import, which was done to allow us to have uniform method names across modules like `phantom-dsl` and `phantom-finagle`.
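In practice, the swap is a single import line:

```scala
// Phantom 1.x (old):
// import com.websudos.phantom.Implicits._

// Phantom 2.x (new): brings in columns, query methods and the
// implicit execution context, among other required implicits.
import com.outworkers.phantom.dsl._
```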
In the future, this is likely to be replaced with Free monads and `cats.free.Free`, but so far we have resisted adding large new dependencies such as Cats. To understand more about this, have a look at execution backends.
As of phantom 2.4.0, phantom is capable of automatically generating a `Row` extractor for the majority of use cases using implicit macros, meaning you will never again need to define that part of the boilerplate. For more details, you can refer to the how extractors work guideline in the documentation.
As of phantom 2.5.0, if you have a manually defined method to insert records into your table, this is no longer necessary. For a full set of details on how the `store` method is generated, refer to the store method docs. This is because phantom auto-generates a basic store method like the one shown below; a usage sketch follows the code block.
```scala
import scala.concurrent.duration._
import scala.concurrent.Future

import com.outworkers.phantom.builder.query.InsertQuery
import com.outworkers.phantom.dsl._

case class Record(
  id: UUID,
  name: String,
  firstName: String,
  email: String
)

abstract class MyTable extends Table[MyTable, Record] {
  object id extends UUIDColumn with PartitionKey
  object name extends StringColumn
  object firstName extends StringColumn
  object email extends StringColumn

  // Phantom now auto-generates the below method.
  def store(record: Record): InsertQuery.Default[MyTable, Record] = {
    insert.value(_.id, record.id)
      .value(_.name, record.name)
      .value(_.firstName, record.firstName)
      .value(_.email, record.email)
  }

  // You can trivially extend the default insert method and add more
  // clauses or features to it. The implicits are required to execute
  // the query against a live session.
  def newRecord(record: Record)(
    implicit session: Session,
    ec: ExecutionContextExecutor
  ): Future[ResultSet] = {
    store(record)
      .ttl(5.minutes)
      .consistencyLevel_=(ConsistencyLevel.ALL)
      .future()
  }
}
```
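Finally, a hedged usage sketch tying the pieces together, assuming the auto-generated `store` and the macro-derived extractor described above; the `RecordDb` wiring, connector value and method names are illustrative, not part of the generated code.

```scala
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

// Illustrative wiring; the contact point and keyspace are placeholders.
class RecordDb(override val connector: CassandraConnection) extends Database[RecordDb](connector) {
  object records extends MyTable with Connector

  // The macro-derived extractor turns rows back into Record instances,
  // so no handwritten fromRow is needed for this query.
  def allRecords: Future[List[Record]] = records.select.all().fetch()

  // The generated store method executes like any other insert query.
  def save(record: Record): Future[ResultSet] = records.store(record).future()
}

object recordDb extends RecordDb(ContactPoint.local.keySpace("records"))
```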