Engineering Blog

This month’s update focuses solely on some key functionality of the O2MC I/O Platform. As of this month it’s possible to use Python as a server side coding language. This allows even more options than the already available Groovy language, mostly because many Python code examples can be found online and a big community has already created an extensive set of Python libraries.

This capability illustrates the typical place and use of the O2MC I/O platform in an ecosystem. It is not our intention to replace Python or provide similar functionality in a different way (with different code). Instead, the O2MC I/O platform uses Python code blocks, as well as other code blocks, which execute individually and in isolation and distribute their results to other components according to the specification in the DimML file. I often compare our technology to cement, with the capability of using bricks to create a wall. The more powerful the bricks, the more powerful the walls that can be built.

The O2MC I/O platform is built on Java, and Python coding works by having the Python code blocks interpreted and translated to Java code using Jython. The challenge we faced while integrating Jython is that we did not want to allow all possible code constructs to be used. For instance, reading a file is relatively harmless when working with a local, open source variant of Python on your laptop, but can be harmful when reading files from our cloud based environment. We feel the balance between these limitations and all the possibilities of Python has resulted in unique capabilities. Not only can the Python code be used and scale automatically, it can talk natively to core platform components for changing content, processing APIs and distributing data.

Python has become popular in the domain of machine learning algorithms, which makes it a good supplement to the typical use of the O2MC I/O platform. The platform is aimed at real time event processing. This means the platform is designed to process requests very fast and in a real time fashion. I see a lot of companies designing advanced learning and personalization algorithms, but lacking the power to apply them to their customers and end users in real time. This is where we have a unique capability, since the SLOC (and similar variants for other data sources) provides a bidirectional connection to the end user’s content and its personalization. Especially with the addition of Python for implementing recommendations, it’s relatively easy to create a good recommender quickly.
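To give a feel for the kind of restriction mentioned above, here is a minimal sketch in plain Python of how code constructs such as file access could be rejected before execution. This is a hypothetical illustration using the standard `ast` module, not the actual O2MC/Jython implementation; the blocked names are assumptions for the example.

```python
import ast

# Hypothetical deny-list for the sketch; the real platform's rules may differ.
BLOCKED_CALLS = {"open", "exec", "eval", "__import__"}

def is_allowed(source: str) -> bool:
    """Return False if the code imports modules or calls a blocked builtin."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        # Disallow imports entirely in this sketch.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        # Disallow direct calls to blocked builtins such as open().
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLOCKED_CALLS:
                return False
    return True

print(is_allowed("a, b = 0, 1"))          # True
print(is_allowed("open('/etc/passwd')"))  # False
```

A static check like this is only a first line of defense; a production sandbox would also restrict the runtime environment itself.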

To complete this post, here is an example DimML file which is triggered from a website to call a Python Fibonacci function, which in turn calls a Groovy function, while the fields in the data tuple are processed with (server side) JavaScript and Groovy.

Take a look under the hood …

concept Global {

match '*'
val url = `location`

=> code[msg = `fib(MAX_FIB)`]
=> code[class = `getClass(msg)`]
=> code[fromGroovy = `fib(MAX_FIB)`@groovy]
=> code[fromJavascript = `fib(MAX_FIB)`@javascript]
=> console:server
const MAX_FIB = `100`
def fib = {max => `
a,b = 0,1
result = []
while b < max:
    result = add(result,b)
    a,b = b,a+b
return result`}

def add = {list, value => `list<<value`}
def getClass = {o => `o.getClass().getName()`}
}
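For readers less familiar with DimML, the Python block above is equivalent to the following standalone Python function (with the Groovy `add` helper replaced by a plain list append, since `list<<value` in Groovy appends to the list):

```python
def fib(max_value):
    """Fibonacci numbers below max_value, mirroring the DimML Python block."""
    a, b = 0, 1
    result = []
    while b < max_value:
        result.append(b)  # the DimML version delegates this to a Groovy function
        a, b = b, a + b
    return result

print(fib(100))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

So with `MAX_FIB = 100`, the `msg` field in the tuple would carry this list of Fibonacci numbers.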