He has almost 20 years of experience in Computer Science, both in industry and academia. He focuses on solutions that increase revenue and optimize costs with the simplest possible means for a given problem (the KISS principle), often taking non-standard but efficient and effective approaches.
He has worked with a wide variety of platforms and solutions across the whole data processing pipeline: from gathering, modeling, and storing data, through analyzing it and finding insights, up to building and deploying AI solutions in production. He also advises companies on building their AI-powered products and on preparing for their data transformations.
Moreover, he has experience building, coordinating, and leading data processing, AI, and software development teams in various startups, as an external expert and advisor. He has also supervised students towards their master's and PhD degrees, and coordinated the efforts of experienced scientists working on research projects.
He has experience presenting results in public, gained at top scientific conferences, as well as giving academic lectures and mentoring younger colleagues. As an independent academic researcher, he developed strong time management and organizational skills.
PhD in Computer Science, 2015
University of Fribourg - Switzerland
MSc in Computer Science, 2010
University of Lodz - Poland
MSc in Computer Science, 2009
Université Claude Bernard (Lyon I) - France
BSc in Computer Science, 2006
WIT under the auspices of Polish Academy of Sciences
This project aimed to analyse Bitcoin graph data and provide detailed information on Bitcoin transactions, along with aggregate on-chain statistics derived from the graph. The resulting statistics, in the form of time series, were later used in predictive machine learning models.
The goal of this project was to develop a Deep Learning powered solution for detecting a disease on medical X-ray images. The model I developed achieved an AUC of 0.98. The solution is part of a larger system designed to assist medical doctors in their daily work.
The goal of this project was to develop a machine learning backed investment strategy based on multiple binary positive and negative signals derived from price, market, and environment analysis. The strategy I developed performed 4x better than the baseline.
The goal of this project was to build an ETL system processing live online Bitcoin transaction data and loading it into a Neo4j graph database on the fly. The data included not only the block chain itself but all details about transactions and interactions between addresses.
The goal of this project was to model historical blockchain data as a graph and to build an ETL system loading the entire dataset (blocks, transactions, inputs, outputs, addresses) into a Neo4j graph database. The solution I designed performed 10x faster than the best previously reported system.
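A common pattern for this kind of bulk graph loading is to flatten raw records into parameter rows and send them to Neo4j in batched, parameterized Cypher statements. The sketch below illustrates the idea only; the record fields, labels, and batch size are hypothetical, not the project's actual schema.

```python
# Hypothetical sketch of batching blockchain records for a Neo4j bulk load.
# Field names and Cypher labels are illustrative.

def to_tx_rows(transactions):
    """Flatten raw transaction dicts into parameter rows for an UNWIND query."""
    return [
        {"txid": tx["txid"], "block": tx["block"], "fee": tx.get("fee", 0)}
        for tx in transactions
    ]

# One parameterized statement per batch is far faster than per-record queries.
LOAD_TX_CYPHER = """
UNWIND $rows AS row
MERGE (t:Transaction {txid: row.txid})
SET t.fee = row.fee
MERGE (b:Block {height: row.block})
MERGE (b)-[:CONTAINS]->(t)
"""

def batches(rows, size=10_000):
    """Split rows into fixed-size chunks to keep each write transaction small."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]
```

With the official neo4j Python driver, each batch would then be written as `session.run(LOAD_TX_CYPHER, rows=batch)`.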
The goal of this project was to develop an AI backend engine for an intelligent decision support system which assesses ischemic stroke risk. The system was developed in collaboration with a health insurer to allow preventive interventions for patients at high risk of a stroke.
The goal of this project was to build an ETL system collecting and computing data about trades and trading positions for various cryptocurrencies. The data about trades and trading positions was integrated on the fly from multiple trading platforms via REST APIs and web sockets.
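Integrating trade data from several platforms typically means mapping each exchange's payload shape to one common schema before any aggregation. A minimal sketch, with entirely hypothetical exchange names and field layouts:

```python
# Hypothetical sketch: normalizing trade payloads from different exchanges
# into one common record shape before aggregation. Field names are illustrative.

def normalize_trade(exchange, payload):
    """Map an exchange-specific trade payload to a common record."""
    if exchange == "exchange_a":          # e.g. a REST polling source
        return {"pair": payload["symbol"],
                "price": float(payload["p"]),
                "qty": float(payload["q"])}
    if exchange == "exchange_b":          # e.g. a websocket stream source
        return {"pair": payload["market"].replace("-", ""),
                "price": float(payload["price"]),
                "qty": float(payload["amount"])}
    raise ValueError(f"unknown exchange: {exchange}")

def notional(trades):
    """Total traded value across already-normalized records."""
    return sum(t["price"] * t["qty"] for t in trades)
```

Once every source emits the same record shape, downstream aggregates like `notional` stay independent of where the data came from.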
The goal of this project was to develop a rules-based decision support system. The solution assisted medical doctors in making fast decisions about blood transfusion. The project was conducted in collaboration with medical doctors as domain experts.
The goal of this project was to develop an AI-backed decision support system helping medical doctors decide about blood transfusions. In collaboration with multiple hospitals, we collected historical data about blood transfusions, patients, and medical tests relevant to transfusion decisions.
The goal of this project was to build an ETL system collecting, normalizing, and aggregating data about the popularity index (search volume) of various keywords from Google Trends. The system had two subsystems: one for pulling historical data in bulk, the other for fetching current data.
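A well-known wrinkle when normalizing Google Trends data is that each request returns values rescaled to a relative 0-100 range, so consecutive time windows must be stitched together by rescaling on their overlap. A minimal sketch of that idea (the overlap-ratio approach is one common choice, not necessarily the one used in this project):

```python
# Hypothetical sketch: merge two Google Trends windows by rescaling the
# second one so its overlapping points match the first.

def stitch(prev, nxt, overlap):
    """Rescale `nxt` to `prev`'s scale via their overlap, then append the rest.

    `prev` and `nxt` are lists of floats; the last `overlap` points of `prev`
    correspond to the first `overlap` points of `nxt`.
    """
    prev_overlap = prev[-overlap:]
    next_overlap = nxt[:overlap]
    # Scale factor from the ratio of overlapping sums (assumes non-zero overlap values).
    factor = sum(prev_overlap) / sum(next_overlap)
    return prev + [v * factor for v in nxt[overlap:]]
```

Chaining `stitch` across all windows yields one continuous series on a single scale, which is what the aggregation stage needs.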
SolarData is a platform that simulates the amount of energy produced by a photovoltaic system in a specified geographic location, based on weather data and the parameters of the photovoltaic system, such as the number and type of solar panels, the angle of the installation, etc.
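For a first-order estimate, PV yield simulations are often anchored in the simplified formula E = A * r * H * PR. The sketch below shows that textbook approximation only, not SolarData's actual model:

```python
# Simplified PV yield formula, commonly used for rough estimates:
#   E = A * r * H * PR
# A  - total panel area (m^2)
# r  - panel efficiency (e.g. 0.2 for 20%)
# H  - annual solar irradiation on the panels (kWh/m^2), from weather data
# PR - performance ratio covering losses (inverter, temperature, wiring; ~0.75)

def annual_energy_kwh(area_m2, efficiency, irradiation_kwh_m2, performance_ratio=0.75):
    """Rough annual energy output of a PV installation in kWh."""
    return area_m2 * efficiency * irradiation_kwh_m2 * performance_ratio
```

For example, 10 m^2 of 20%-efficient panels under 1200 kWh/m^2 of annual irradiation yield roughly 1800 kWh per year at PR = 0.75.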
The goal of this project was to develop an AI backend for a frequently asked questions chatbot. The client had collected a significant amount of questions and answers, which were used to train machine learning natural language processing models.
The goal of this project was to build an ETL system collecting and aggregating data about various cryptoassets. The data about blocks and transactions on these cryptoassets was fetched and integrated on the fly from multiple data sources via REST APIs.
The goal of this project was to develop an anomaly detection pipeline for business intelligence systems, in order to trigger BI report distribution - via Slack or e-mail - when an anomalous situation was detected, or to send an alert to predefined groups of managers.
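One simple way such an alerting trigger can work is a rolling z-score check: flag a metric when it deviates too far from its recent history. This is a generic sketch with illustrative thresholds, not the pipeline's actual detector:

```python
# Hypothetical sketch: z-score anomaly check over a sliding window of a BI metric.
from statistics import mean, stdev

def is_anomaly(history, value, window=30, threshold=3.0):
    """Flag `value` if it deviates more than `threshold` standard deviations
    from the recent `window` of `history`."""
    recent = history[-window:]
    if len(recent) < 2:
        return False          # not enough data to estimate spread
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        return value != mu    # any change from a flat series is anomalous
    return abs(value - mu) / sigma > threshold
```

When the check fires, the pipeline would hand off to the distribution side, e.g. posting the report to a Slack channel or mailing the relevant managers.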
TripleProv is an in-memory RDF database capable of storing, tracing, and querying provenance information while processing Linked Data queries. TripleProv returns an understandable description, at multiple levels of granularity, of the way the results of a SPARQL query were derived.
dipLODocusRDF is a system for Linked Data processing that efficiently supports both simple transactional queries and complex analytics. dipLODocusRDF is based on a hybrid storage model. DiploCloud is the distributed version of dipLODocusRDF.
The key aspect of data science is working closely with business stakeholders to understand their goals and determine how data can be used to achieve them. The output of this process is value created from data.
A Data Scientist gathers information from various sources and analyzes it to understand how your business performs. Working with large data sets and using them to identify trends, we reach meaningful conclusions that inform strategic business decisions. Having discovered knowledge in your data, a Machine Learning Engineer leverages it to build Artificial Intelligence tools that automate certain processes within your company.
A Data Engineer plays a key role in building data-driven solutions by maintaining data pipelines and databases for ingesting, transforming, and storing data. A Data Engineer supports data science by connecting data from one source to another and transforming it from one format to another. A Data Engineer builds the data infrastructure connecting data ecosystems that makes a data scientist's work possible.
A DevOps Engineer works with developers to facilitate better coordination among the operations, development, and testing functions by automating integration and deployment processes.
DevOps helps in:
A Software Consultant advises clients on how to architect and develop large applications, also helping to technically shape a dev team and to maintain the software development and infrastructure ecosystem.
Software consulting covers the following areas: