How it works


The first layer of the platform is a data lake (a centralized file store, such as Amazon S3) that brings together the data from your different systems and tools. Production, marketing, sales data: feed Datakeen with all types of files, including text, tables, images, and video or audio recordings. Data can be uploaded from your workstation, or existing databases and SaaS tools can be connected directly.
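To make the idea concrete, here is a minimal sketch of file centralization, assuming a local directory stands in for the lake. The zone names and extension mapping are hypothetical; a real deployment would write to an object store such as Amazon S3 (for example via a client library like boto3) rather than a local folder.

```python
# Toy "data lake" ingestion: copy files from many sources into one
# centralized store, organized by data type. Illustrative sketch only.
import shutil
from pathlib import Path

# Hypothetical mapping from file extension to a data-lake zone.
ZONES = {
    ".csv": "tables", ".txt": "text", ".jpg": "images",
    ".png": "images", ".mp4": "video", ".wav": "audio",
}

def ingest(source: Path, lake_root: Path) -> Path:
    """Copy a file into the lake zone matching its type."""
    zone = ZONES.get(source.suffix.lower(), "raw")  # unknown types land in "raw"
    dest = lake_root / zone / source.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, dest)
    return dest
```

The same ingestion call can serve uploads from a workstation and automated pulls from connected databases or SaaS tools, since both reduce to writing files into the central store.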


Data analysis projects are collaborative: a sharing system and a chat are available to collaborators. A system of roles and permissions controls which access and features each collaborator has. A precise, per-user history of every transformation applied to the data is available through the interface.
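The role-and-permission model and the nominative history can be sketched as follows. This is an illustrative toy, assuming three hypothetical roles; it is not Datakeen's actual permission model.

```python
# Toy role/permission check plus a nominative audit trail.
# Role names and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "analyst": {"read", "transform"},
    "admin": {"read", "transform", "share", "manage_users"},
}

# Each entry records who performed which transformation (nominative history).
AUDIT_LOG: list[tuple[str, str]] = []

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def apply_transformation(user: str, role: str, transformation: str) -> bool:
    """Apply a transformation only if permitted, recording who did what."""
    if not can(role, "transform"):
        return False
    AUDIT_LOG.append((user, transformation))
    return True
```

Keeping the permission check and the audit write in one code path is what guarantees that every recorded transformation was an authorized one.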


Once the data has been aggregated and centralized, advanced AI techniques can be applied to, among other things:
  • finely segment customers for marketing campaigns
  • identify and prevent breakdowns or failures in production processes
  • classify documents or emails automatically to save operational time
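As a concrete illustration of the third use case, here is a deliberately simple keyword-based classifier for routing emails. The categories and keywords are hypothetical, and a production system would use a trained machine-learning model rather than fixed keyword sets; this sketch only shows the shape of the task.

```python
# Toy automatic document/email classifier. Illustrative sketch only:
# categories and keyword sets are hypothetical.
import re

KEYWORDS = {
    "invoice": {"invoice", "payment", "amount", "due"},
    "support": {"error", "bug", "help", "issue"},
    "sales": {"quote", "pricing", "demo", "order"},
}

def classify(text: str) -> str:
    """Return the category whose keywords best match the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {cat: len(words & kws) for cat, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

Automatically routing each incoming message to the right team this way is where the operational time saving comes from.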
How can these different projects be approached in a standard, unified way, using state-of-the-art machine learning methods? And once a use case has been validated in experimentation, how can it be brought to production with an end-to-end pipeline that is tested, monitored, and transparent? Datakeen, built on container technology (Docker), includes a resource orchestrator that makes the move from experimentation to production seamless. Models are also optimized and monitored over time to ensure a high level of quality and incremental improvement.
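The experiment-to-production flow described above can be sketched as a pipeline of named steps with a transparent run log. This is a minimal illustration, assuming hypothetical train/validate/deploy steps; in a real orchestrator each step would run in its own Docker container.

```python
# Toy end-to-end pipeline: ordered steps, a validation gate before
# production, and a transparent log of what ran. Illustrative sketch only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    steps: list[tuple[str, Callable[[dict], dict]]] = field(default_factory=list)
    log: list[str] = field(default_factory=list)  # transparent run history

    def run(self, state: dict) -> dict:
        for name, step in self.steps:
            state = step(state)          # each step transforms the state
            self.log.append(f"{name}: ok")  # monitored, step by step
        return state

# Hypothetical steps: train a model, validate it, then deploy it.
def train(state: dict) -> dict:
    return {**state, "model": "v1"}

def validate(state: dict) -> dict:
    assert state.get("model") is not None, "validation gate before production"
    return {**state, "validated": True}

def deploy(state: dict) -> dict:
    return {**state, "deployed": True}

pipeline = Pipeline([("train", train), ("validate", validate), ("deploy", deploy)])
```

Because every step is named and logged, the same pipeline definition serves both experimentation and production, which is the fluidity the orchestrator provides.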

Interested? Contact us to discuss your use cases and request a demo.