2014-09-02 by Stefan Urbanek
Finally it is here: Cubes 1.0. Many of you are already using it from
Github or from PyPi; it just has not
been officially released, so here we go.
Cubes now brings a lightweight way to create a concept-oriented, pluggable
data warehouse from multiple sources.
Summary:
- Analytical Workspace and Model Providers
- Model Objects Redesign
- HTTP API changes
- New Backends
- New SQL Backend Features
- Authentication and Authorization
Detailed list of changes.
The changes are major and backward-incompatible, but necessary for the future
direction of Cubes.
Analytical Workspace
The biggest change is the Workspace – a pluggable data warehouse. You are no
longer limited to one model, one type of data store (database) and one set
of cubes. The new Workspace is a framework-level controller object that
manages models (model sources), cubes and data stores. More features will be
added to the workspace in the future.
- Multiple models per workspace/server instead of only one
- Multiple backends per workspace/server instead of only one
- Multiple data stores per workspace/server instead of only one
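For illustration, here is a minimal sketch of the new Workspace API; the
store URL, model file and cube name are illustrative:

from cubes import Workspace

# Create a workspace, register a default data store and import a model
workspace = Workspace()
workspace.register_default_store("sql", url="sqlite:///data.sqlite")
workspace.import_model("model.json")

# Get a browser for one of the cubes and aggregate
browser = workspace.browser("sales")
result = browser.aggregate()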
Models can now be generated or converted on-the-fly from another service with
the new concept of Model Providers.
See also:
Workspace,
Providers
Model Objects Redesign
A notable change is the addition of a new object: the measure aggregate.
Cubes now distinguishes between measures and aggregates. A measure represents
a numerical fact property; an aggregate represents an aggregated value (an
aggregate function applied to a property, or a value provided natively by the
backend). This new approach to aggregates makes development of backends and
clients much easier. There is no need to construct and guess aggregate
measures or to split names from functions. Backends receive concrete objects
with sufficient information to perform the aggregation (either by applying a
function or by fetching an already computed value).
You can now name the "record_count" aggregate as you like, or leave it out
entirely if you do not need it.
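For example, a cube's metadata might now list measures and their aggregates
separately (the measure and aggregate names here are illustrative):

"measures": [
    {"name": "amount"}
],
"aggregates": [
    {"name": "amount_sum", "measure": "amount", "function": "sum"},
    {"name": "record_count", "function": "count"}
]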
More info about the model can be found in the
model documentation.
Other model changes:
- cardinality – metadata that helps the front-end determine which kind of UI
item to use, or that might restrict high-cardinality queries
- dimension linking – cubes can specify how dimensions are to be linked:
which hierarchies are relevant to the cube, what the cardinality of a
dimension is in the context of the cube, and more
- roles – dimensions and levels can have roles: metadata that might make
dims/levels be handled in a special way. Currently only the time role is
implemented.
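As a rough sketch of these metadata, a date dimension might be described as
follows (a simplified illustration, not a complete dimension description):

{
    "name": "date",
    "role": "time",
    "levels": [
        {"name": "year"},
        {"name": "month", "cardinality": "low"},
        {"name": "day", "cardinality": "high"}
    ]
}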
HTTP API Changes
The server end-points have changed. The concept of a global model was
dropped; now there is just a list of cubes. The front-end should approach the
server in two steps:
- Get the list of cubes with /cubes – only basic information, no structure
metadata
- Get the full cube model with /cube/NAME/model
Other changes:
- Many actions now accept a format= parameter, which can be json, csv or
json_lines (newline-separated JSON).
- Cuts for the date dimension accept named relative time references such as
cut=date:90daysago-today
- Dimension path elements can contain special characters if they are escaped
by a backslash, such as cut=city:Nové\ Mesto
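To illustrate, a few example requests against a slicer server (the sales
cube name is made up):

GET /cubes
GET /cube/sales/model
GET /cube/sales/aggregate?cut=date:90daysago-today&format=csv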
More info
Backends
New backends:
- MongoDB (thanks to Robin Thomas)
- full implementation of the Slicer backend
- Mixpanel
- Google Analytics
With model providers you can easily create a backend for any other service
which serves cube-like data and plug it into your data warehouse.
SQL Features
A notable addition to the SQL backend is outer joins (finally!): three types
of joins were added: match (inner), master (left outer) and detail (right
outer).
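The join type is selected with the method key in the join metadata; a sketch
(the table and column names are illustrative):

"joins": [
    {
        "master": "fact_sales.store_id",
        "detail": "dim_store.id",
        "method": "detail"
    }
]

With method detail, all stores are kept in the result, even those with no
matching sales facts.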
More info about the SQL features.
Non-additive
Provisional semi-additive time dimension support was added. An aggregate can
specify that it is non-additive through the time dimension (such as account
amount snapshots) and the generated query will handle the situation by
choosing the latest entry.
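For example, an aggregate of an account-amount measure might be marked as
non-additive through time (the names here are illustrative; the nonadditive
attribute is described in the model documentation):

"aggregates": [
    {
        "name": "amount_sum",
        "measure": "amount",
        "function": "sum",
        "nonadditive": "time"
    }
]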
The initial metadata infrastructure is in place. A more flexible
implementation that will include other dimensions and element selection
functions will be introduced in future releases.
Credit goes to Robin Thomas for this feature.
Authentication and Authorization
Authentication happens at the server level, authorization at the workspace level.
The interface is extensible, so you can implement any method you like. The
built-in methods are pretty simple:
- permissive authentication methods: pass-parameter (just pass an api_key
parameter in the URL) or HTTP Basic proxy (using the username and ignoring
the password); there is also one for testing purposes called "adminadmin"...
- authorization: JSON configuration for roles (inheritable) and rights
Authorization has two parts: access to a cube and a restriction cell for the cube.
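A minimal slicer.ini sketch wiring authentication and authorization together
might look like this (the rights file name is illustrative):

[server]
authentication: pass_parameter

[workspace]
authorization: simple

[authorization]
rights_file: access_rights.json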
More info about authorization
Creating an auth extension
Visualizer
Cubes comes with a built-in Visualizer – a web app for visualizing cube data
over time series. Main features: drill-down, filtering, many chart options;
it connects to any Cubes server. The application was developed by Robin Thomas
and Ryan Berlew.
Instructions
About the Release
This release is a milestone for the Cubes interfaces: the model metadata
structure and the slicer API. They are very unlikely to change; at most they
might be slightly adjusted while maintaining backward compatibility, or at
least some kind of conversion tools will be provided.
Things that might change, due to planned refactoring:
- Python interface – mostly Workspace and model compilation
- Localization – definition of model localization
- Extensions interface - which methods the extensions should implement and how
- Configuration – slight changes in the slicer.ini sections
The above changes will be stabilized around the v1.1 or v1.2 release.
To sum it up: it is safe to build applications on top of the Slicer/HTTP
interface and it is safe to generate models to be used by cubes.
Credits
Many thanks to Robin Thomas and Ryan
Berlew for major code contributions and for the
Visualizer. Credit also goes to
Jose Juan Montes,
Tomas Levine and
Byron Yi.
Links
Read the detailed list of changes.
Important note: The cubes repository has moved to the Data
Brewery github organization group (read
more).
Most recent sources can be found on github.
Questions, comments and suggestions can be posted to the Cubes Google Group
for discussion, problem solving and announcements.
Submit issues and suggestions on github
IRC channel #databrewery on irc.freenode.net
2014-02-25 by Stefan Urbanek
Before the upcoming 1.0 release, I would like to introduce
Cubes core developer Robin
Thomas. Robin is an experienced data warehouse engineer
with profound knowledge of OLAP and multidimensional modeling. Robin and his
team did a great job, contributed many new features and concepts.
We have quite a lot of new features and ideas thanks to Robin. Just to name a
few:
- new, completely rewritten Mongo backend
- authorization and authentication
- non-additive time dimension
- statistical functions
and many more.
Thanks and credit goes also to: Brad
Willard, Mathew Thomas, Ryan Berlew,
Andrew Bartholomew and Emily Wagner.
In addition, I would like to thank Squarespace for
their kindness and for contributing their additions back to the community as
open-source.
2014-02-20 by Stefan Urbanek
Here is a short presentation about the Cubes workspace changes:
Most recent Cubes sources can be found on github.
Read the development documentation.
2013-08-02 by Stefan Urbanek
Expressions is a lightweight arithmetic expression parser for creating simple
arithmetic expression compilers.
The goal is to provide a minimal and understandable interface for handling
arithmetic expressions of the same grammar but slightly different dialects
(see below). The framework will stay lightweight and it is unlikely that it
will provide any more complex grammatical constructs.
The parser is hand-written to avoid any dependencies. The only requirement is
Python 3.
Source: github.com/Stiivi/expressions
Features
The expression is expected to be an infix expression that might contain:
- numbers and strings (literals)
- variables
- binary and unary operators
- function calls with variable number of arguments
The compiler is then used to build an object as a result of the compilation of
each of the tokens.
Dialects
The grammar of the expression is fixed. Slight differences can be specified
using a dialect structure, which contains:
- a list of operators, their precedence and associativity
- case sensitivity (currently used only for keyword-based operators)
Planned options of a dialect that will be included in future releases:
- string quoting characters (currently single ' and double " quotes)
- identifier quoting characters (currently unsupported)
- identifier characters (currently _ and alphanumeric characters)
- decimal separator (currently .)
- function argument list separator (currently the comma ,)
Use
Intended use is embedding of customized expression evaluation into an
application.
Example uses:
- Variable checking compiler with an access control to variables.
- Unified expression language where various other backends are possible.
- Compiler for custom object structures, such as for frameworks providing a
functional-programming-like interface.
How-to
Write a custom compiler class and implement these methods:
- compile_literal – takes a number or a string object
- compile_variable – takes a variable name
- compile_operator – takes a binary operator and two operands
- compile_unary – takes a unary operator and one operand
- compile_function – takes a function name and a list of arguments
Every method receives a compilation context, which is a custom object passed
to the compiler in the compile(expression, context) call.
The following compiler re-compiles an expression back into its original form,
with optional access restriction to only certain variables specified as the
compilation context:
from expressions import Compiler, ExpressionError

class AllowingCompiler(Compiler):
    def compile_literal(self, context, literal):
        return repr(literal)

    def compile_variable(self, context, variable):
        """Returns the variable if it is allowed in the `context`"""
        if context and variable not in context:
            raise ExpressionError("Variable %s is not allowed" % variable)
        return variable

    def compile_operator(self, context, operator, op1, op2):
        return "(%s %s %s)" % (op1, operator, op2)

    def compile_unary(self, context, operator, operand):
        return "(%s%s)" % (operator, operand)

    def compile_function(self, context, function, args):
        arglist = ", ".join(str(arg) for arg in args)
        return "%s(%s)" % (function, arglist)
Create a compiler instance and try to get the result:
compiler = AllowingCompiler()
result = compiler.compile("a + b", context=["a", "b"])
a = 1
b = 1
print(eval(result))
The output would be 2, as expected. The following will fail, because c is not
among the allowed variables:
result = compiler.compile("a + c", context=["a", "b"])
For more examples, such as building a SQLAlchemy structure from an
expression, see the examples folder.
Summary
Source: github.com/Stiivi/expressions
If you have any questions, comments, requests, do not hesitate to ask.
2013-06-22 by Stefan Urbanek
After a while of silence, here is the first release of Bubbles – a virtual
data objects framework.
Motto: Focus on the process, not the data technology
Here is a short presentation:
Introduction
I have started collecting functionality from various private data frameworks
into one, cleaning the APIs in the process.
Priorities of the framework are:
- understandability of the process
- auditability of the data being processed (frequent use of metadata)
- usability
- versatility
Working with data:
- keep data in their original form
- use native operations if possible
- performance provided by technology
- have options
Bubbles is performance-agnostic at the low level of physical data
implementation. Performance should be assured by the data technology and
proper use of operations.
What Bubbles is not:
- A numerical or statistical data tool. Use, for example, Pandas instead.
- A data mining tool. It might provide data mining functionality in some
sense, but that will be provided by some other framework. For now, use
Scikit Learn.
- An all-purpose SQL abstraction framework. It provides operations on top of
SQL, but does not cover all the possibilities. Use SQLAlchemy for special
constructs.
Data Objects and Representations
A data object might have one or multiple representations. A SQL table might
act as a Python iterator or might be composed as a SQL statement. The more
abstract and flexible the representation, the better. If representations can
be composed or modified at the metadata level, that is much better than
streaming data all around the place.
Operations
The functionality of Bubbles is provided by operations. An operation takes
one or more objects as operands and a set of parameters that affect the
operation. There are multiple versions of an operation – one for each object
representation. Which version is used is decided at runtime. For example: if
there are SQL and iterator versions and the operand is SQL, then SQL
statement composition will be used.
Implementing custom operations is easy through the @operation decorator.
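As a rough sketch of the pattern (the operation below is made up, and the
decorator signature is an assumption, not the definitive interface):

from bubbles import operation

# Register a version of an operation for objects with the "rows"
# (Python iterator) representation; a "sql" version of the same
# operation would compose a SQL statement instead.
@operation("rows")
def keep_positive(ctx, obj, field):
    # A real implementation would consume the object's rows and return
    # a new data object with only rows where `field` is positive.
    ...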
I will be talking about them in detail in one of the upcoming blog posts.
Here is a list:
Bubbles (Brewery2) - Operations by Stefan Urbanek
Epilogue
Bubbles comes as a Python 3.3 framework. There is no plan for a Python 2
back-port.
Bubbles will follow the 'provide mechanisms, not policies' rule as much as
possible. Even though some policies are introduced at the early stages of the
framework, such as operation dispatch or graph execution, they will be opened
up later for custom replacement.
Version 0.2 is already planned and will contain:
- processing graph – connected nodes, like in the old Brewery
- more basic backends, at least Mongo and some APIs
- bubbles command line tool
Links
Sources can be found on github.
Read the documentation.
Join the Google Group for discussion, problem solving and announcements.
Submit issues and suggestions on github
IRC channel #databrewery on irc.freenode.net
If you have any questions, comments, requests, do not hesitate to ask.