Helping organizations optimize their operations and increase revenue with data-driven and artificial intelligence solutions.
He has 20 years of experience in Computer Science, in both industry and academia. He focuses on solutions that increase revenue and optimize costs using the simplest possible means for a given problem (the KISS principle), often through non-standard but efficient and effective approaches.
He has worked with a wide variety of platforms and solutions across the whole data processing pipeline: from gathering, modeling, and storing data, through analyzing it and finding insights, to building and deploying AI solutions in production. He also advises companies on building their AI-powered products and preparing for their data transformations.
Moreover, he has experience building, coordinating, and leading data processing, AI, and software development teams in different startups, as an external expert and advisor. He has also supervised students toward their master’s and PhD degrees, and coordinated the efforts of experienced scientists working on research projects.
He has experience presenting results at top scientific conferences, as well as giving academic lectures and mentoring younger colleagues. As an independent academic researcher, he developed strong time management and organizational skills.
PhD in Computer Science, 2015
University of Fribourg, Switzerland
MSc in Computer Science, 2010
University of Lodz, Poland
MSc in Computer Science, 2009
Université Claude Bernard (Lyon I), France
BSc in Computer Science, 2006
WIT, under the auspices of the Polish Academy of Sciences, Poland
The client was building a non-invasive glucose monitoring system. One of the first steps was to calibrate the physical device so that raw signal measurements could be transformed into glucose levels. The preliminary work involved analyzing chronoamperometry data to find optimal device parameters and a calibration procedure.
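To give a flavor of the approach, here is a minimal sketch of such a calibration as a curve-fitting problem: a Cottrell-style decay is fitted to chronoamperometry readings, and the fitted amplitude is mapped to a glucose estimate through a linear calibration. All constants and the linear mapping are illustrative assumptions, not the client's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

def cottrell(t, a, c):
    # Cottrell-style decay: current ~ a / sqrt(t), plus a baseline offset c
    return a / np.sqrt(t) + c

# Hypothetical chronoamperometry readings: time (s) and measured current (uA)
t = np.linspace(0.5, 30, 60)
i = cottrell(t, a=4.2, c=0.3) + np.random.normal(0, 0.05, t.size)

(a_fit, c_fit), _ = curve_fit(cottrell, t, i, p0=(1.0, 0.0))

# Hypothetical linear calibration from fitted amplitude to glucose (mg/dL);
# in practice the slope and intercept come from reference measurements.
SLOPE, INTERCEPT = 25.0, 10.0
glucose = SLOPE * a_fit + INTERCEPT
print(f"fitted amplitude={a_fit:.2f}, estimated glucose={glucose:.1f} mg/dL")
```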
This project aimed to provide a means of computing how likely it is that two Bitcoin addresses belong to the same entity.
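A minimal sketch of one standard ingredient for such an estimate, the common-input-ownership heuristic: addresses spent together as inputs of the same transaction are clustered with a union-find structure. The transaction data below is hypothetical, and a production system would combine several heuristics into a calibrated probability rather than a 0/1 answer.

```python
# Union-find over addresses: inputs spent together in one transaction
# are assumed (heuristically) to be controlled by the same entity.
parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Hypothetical transactions: each is the list of its input addresses
transactions = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
for tx_inputs in transactions:
    for addr in tx_inputs[1:]:
        union(tx_inputs[0], addr)

def same_entity_likely(a, b):
    # 1.0 if the heuristic clusters them together, else 0.0;
    # a real system would return a calibrated probability.
    return 1.0 if find(a) == find(b) else 0.0

print(same_entity_likely("addr1", "addr3"))  # 1.0
print(same_entity_likely("addr1", "addr4"))  # 0.0
```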
The goal of this project was to develop a Generative Adversarial Network (GAN) capable of generating unique logotypes for teams on a gaming platform. The trained model enabled the creation of 1M unique logotypes for the platform's players.
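For illustration, a minimal sketch of the generator side of a DCGAN-style setup in PyTorch; the layer sizes and the 64x64 output resolution are assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator: latent vector -> 64x64 RGB logotype.
# Layer sizes are illustrative, not the project's actual architecture.
class Generator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                     # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

# Each distinct latent vector yields a distinct image, which is how a
# single trained model can produce millions of unique logotypes.
g = Generator()
logos = g(torch.randn(16, 128))  # 16 logotypes, shape (16, 3, 64, 64)
```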
This project aimed to analyze customers’ Bitcoin transactions in order to estimate their liquidity and analyze their spending.
The client was having trouble collecting and analyzing data on the customers using their API platform. I was asked to research and develop a solution enabling data analytics and visualization on the platform. The delivered solution allowed my client to understand their customers’ behavior.
The goal of this project was to research and evaluate deep reinforcement learning methods to improve order placement in limit order markets.
This project aimed to analyze Bitcoin graph data and to provide detailed information on transactions, together with aggregate on-chain statistics derived from the graph. The resulting time series were later used in predictive machine learning models.
The goal of this project was to develop a Deep Learning powered solution for detecting a disease in medical X-ray images. The model I developed achieved an AUC of 0.98. The solution is part of a larger system being developed to assist medical doctors in their daily work.
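As a sketch of the general shape of such a classifier (the backbone choice and data loading are assumptions, not the actual system): a pretrained network fine-tuned for a binary output and evaluated with ROC AUC.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Pretrained backbone fine-tuned for a binary (disease / no disease) output;
# DenseNet-121 is a common choice for chest X-rays, assumed here.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)  # torchvision >= 0.13
model.classifier = nn.Linear(model.classifier.in_features, 1)

def evaluate_auc(model, loader, device="cpu"):
    """ROC AUC over a validation loader yielding (image, label) batches."""
    model.eval()
    scores, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device)).squeeze(1)
            scores.extend(torch.sigmoid(logits).cpu().tolist())
            labels.extend(y.tolist())
    return roc_auc_score(labels, scores)
```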
The goal of this project was to develop a machine-learning-backed investment strategy based on multiple binary positive and negative signals sourced from price, market, and environmental analysis. The strategy I developed performed 4x better than the baseline.
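One plausible way to combine such binary signals, shown purely as a sketch and not as the project's actual method: fit a logistic regression over the signal vector and enter a position only above a probability cutoff.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a vector of binary signals
# (1 = signal fired, 0 = not) from price, market, and environmental
# analysis; the target marks whether the subsequent return beat a threshold.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 8))  # 8 binary signals
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 500) > 1.2).astype(int)

clf = LogisticRegression().fit(X, y)

# Score a new opportunity: enter a position only above a probability cutoff.
signals = np.array([[1, 0, 1, 1, 0, 0, 1, 0]])
p = clf.predict_proba(signals)[0, 1]
print("enter trade" if p > 0.6 else "stay out", f"(p={p:.2f})")
```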
The goal of this project was to build an ETL system processing live Bitcoin transaction data and loading it into a Neo4j graph database on the fly. The data included not only the chain of blocks but also full details of transactions and the interactions between addresses.
The goal of this project was to model historical blockchain data as a graph and to build an ETL system loading the entire dataset (blocks, transactions, inputs, outputs, addresses) into a Neo4j graph database. The solution I designed performed 10x faster than the best-reported system.
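A minimal sketch of the batched-loading idea, assuming the neo4j 5.x Python driver and a heavily simplified graph model: records are sent in UNWIND batches rather than one query per record, which is the usual first step toward fast bulk loads.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Batched load: one Cypher statement per batch via UNWIND instead of one
# round-trip per record, the usual first step toward fast bulk loading.
LOAD_TXS = """
UNWIND $batch AS tx
MERGE (b:Block {height: tx.height})
MERGE (t:Transaction {txid: tx.txid})
MERGE (t)-[:INCLUDED_IN]->(b)
FOREACH (addr IN tx.outputs |
  MERGE (a:Address {address: addr})
  MERGE (t)-[:PAYS_TO]->(a))
"""

def load_batch(batch):
    with driver.session() as session:
        session.execute_write(lambda tx: tx.run(LOAD_TXS, batch=batch))

# Hypothetical simplified records; real ones carry inputs, values, scripts, etc.
load_batch([
    {"height": 1, "txid": "tx-a", "outputs": ["addr1", "addr2"]},
    {"height": 1, "txid": "tx-b", "outputs": ["addr3"]},
])
```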
The goal of this project was to develop an AI backend engine for an intelligent decision support system that assesses ischemic stroke risk. The system was developed in collaboration with a health insurer to enable preventive interventions for patients at high risk of stroke.
The goal of this project was to build an ETL system collecting and computing data about trades and trading positions for various cryptocurrencies. Trade and position data were integrated on the fly from multiple trading platforms via REST APIs and WebSockets.
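A minimal sketch of the streaming side, assuming the websockets library and a hypothetical exchange endpoint and message format; real integrations need per-exchange subscription messages, authentication, and reconnection logic.

```python
import asyncio
import json
import websockets

# Hypothetical exchange endpoint and message format.
WS_URL = "wss://example-exchange.com/ws/trades"

async def stream_trades():
    async with websockets.connect(WS_URL) as ws:
        await ws.send(json.dumps({"op": "subscribe", "channel": "trades"}))
        async for message in ws:
            trade = json.loads(message)
            # Normalize into a common schema before loading downstream.
            yield {
                "exchange": "example",
                "symbol": trade["symbol"],
                "price": float(trade["price"]),
                "size": float(trade["size"]),
                "ts": trade["timestamp"],
            }

async def main():
    async for trade in stream_trades():
        print(trade)  # in the real system: push into the ETL pipeline

asyncio.run(main())  # runs until interrupted
```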
The goal of this project was to develop a rules-based decision support system that assisted medical doctors in making fast decisions about blood transfusions. The project was conducted in collaboration with medical doctors as domain experts.
The goal of this project was to develop an AI-backed decision support system helping medical doctors decide on blood transfusions. In collaboration with multiple hospitals, we collected historical data about blood transfusions, patients, and medical tests relevant to transfusion decisions.
The goal of this project was to build an ETL system collecting, normalizing, and aggregating data about the popularity index (search volume) of various keywords from Google Trends. The system had two subsystems: one for pulling historical data in bulk, the other for keeping current data up to date.
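A minimal sketch of the pull side using the unofficial pytrends client; the keywords and timeframes are placeholders. Google Trends normalizes values to 0-100 within each request, so a real pipeline must also re-normalize across requests before aggregating.

```python
import pandas as pd
from pytrends.request import TrendReq

# Unofficial Google Trends client; keywords and timeframes are placeholders.
pytrends = TrendReq(hl="en-US", tz=0)

def fetch_interest(keywords, timeframe="today 5-y"):
    """Pull interest-over-time for up to 5 keywords in one request."""
    pytrends.build_payload(keywords, timeframe=timeframe)
    df = pytrends.interest_over_time()
    return df.drop(columns=["isPartial"], errors="ignore")

historical = fetch_interest(["bitcoin", "ethereum"])          # bulk backfill
current = fetch_interest(["bitcoin", "ethereum"], "now 7-d")  # recent data

# Values are normalized to 0-100 *within each request*, so overlapping
# windows must be rescaled before the two subsystems' data is merged.
print(pd.concat([historical.tail(3), current.tail(3)]))
```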
SolarData is a platform for simulating the amount of energy produced by a photovoltaic system at a specified geographic location, based on weather data and the parameters of the installation, such as the number and type of solar panels and the tilt angle.
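At its core, such a simulation multiplies irradiance by the characteristics of the installation. A minimal sketch using a common performance-ratio simplification (not necessarily SolarData's exact model):

```python
def pv_energy_kwh(irradiance_kwh_m2, panel_area_m2, panel_count,
                  panel_efficiency=0.20, performance_ratio=0.80):
    """Simplified PV yield: E = H * A * n * eta * PR.

    irradiance_kwh_m2: plane-of-array irradiance for the period (kWh/m^2),
    already accounting for location and tilt; eta is panel efficiency and
    PR lumps inverter, temperature, and soiling losses together.
    """
    return (irradiance_kwh_m2 * panel_area_m2 * panel_count
            * panel_efficiency * performance_ratio)

# E.g. 1600 kWh/m^2/year at the site, 20 panels of 1.7 m^2 each:
print(f"{pv_energy_kwh(1600, 1.7, 20):,.0f} kWh/year")  # 8,704 kWh/year
```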
The goal of this project was to develop an AI backend for a frequently asked questions chatbot. The client had collected a significant number of questions and answers, which were used to train machine learning natural language processing models.
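One common baseline for FAQ matching, shown as a sketch rather than the models actually deployed: TF-IDF vectorization plus cosine similarity to route a user question to the closest known answer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ corpus; the real project trained on the client's Q&A data.
faq = [
    ("How do I reset my password?", "Use the 'Forgot password' link on the login page."),
    ("How can I delete my account?", "Contact support and request account deletion."),
    ("What payment methods do you accept?", "We accept credit cards and bank transfers."),
]
questions = [q for q, _ in faq]

vectorizer = TfidfVectorizer()
question_vecs = vectorizer.fit_transform(questions)

def answer(user_question, min_score=0.2):
    sims = cosine_similarity(vectorizer.transform([user_question]), question_vecs)[0]
    best = sims.argmax()
    # Fall back to a human when no known question is similar enough.
    return faq[best][1] if sims[best] >= min_score else "Let me connect you to support."

print(answer("I forgot my password"))
```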
The goal of this project was to build an ETL system collecting and aggregating data about various crypto assets. Data about blocks and transactions for these assets was fetched and integrated on the fly from multiple data sources via REST APIs.
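A minimal sketch of the integration pattern, with hypothetical endpoints and schemas: each source is polled over REST and mapped onto one common record format before aggregation.

```python
import requests

# Hypothetical endpoints; each real source has its own schema and rate limits.
SOURCES = {
    "source_a": "https://api.source-a.example/v1/blocks/latest",
    "source_b": "https://api.source-b.example/blocks/tip",
}

def normalize(source, payload):
    """Map each source's schema onto one common block record."""
    if source == "source_a":
        return {"height": payload["height"], "tx_count": payload["n_tx"]}
    return {"height": payload["block"]["number"], "tx_count": len(payload["block"]["txs"])}

def fetch_latest_blocks():
    records = []
    for source, url in SOURCES.items():
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        records.append({"source": source, **normalize(source, resp.json())})
    return records

for record in fetch_latest_blocks():
    print(record)  # in the real system: aggregate and load downstream
```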
The goal of this project was to develop an anomaly detection pipeline for business intelligence systems, triggering the distribution of a BI report (via Slack or e-mail) or an alert to predefined groups of managers whenever an anomalous situation was detected.
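A minimal sketch of one simple detector that fits this kind of pipeline, a rolling z-score over a KPI time series; the window, threshold, and data are illustrative.

```python
import pandas as pd

def detect_anomalies(series, window=30, threshold=3.0):
    """Flag points more than `threshold` rolling standard deviations
    away from the rolling mean (a simple z-score detector)."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    z = (series - mean) / std
    return z.abs() > threshold

# Hypothetical daily KPI; for each flagged day the pipeline would fire
# an alert (Slack/e-mail) and distribute the relevant BI report.
kpi = pd.Series([100, 102, 99, 101, 103] * 10 + [180], dtype=float)
flags = detect_anomalies(kpi, window=20)
for day in kpi.index[flags]:
    print(f"day {day}: anomalous value {kpi[day]:.0f} -> notify managers")
```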
TripleProv is an in-memory RDF database capable of storing, tracing, and querying provenance information while processing Linked Data queries. TripleProv returns an understandable description, at multiple levels of granularity, of the way the results of a SPARQL query were derived.
dipLODocusRDF is a system for Linked Data processing that efficiently supports both simple transactional queries and complex analytics. It is based on a hybrid storage model. DiploCloud is the distributed version of dipLODocusRDF.
The key aspect of data science is working closely with business stakeholders to understand their goals and to determine how data can be used to achieve them. The output of this process is value created from data.
A Data Scientist gathers information from various sources and analyzes it to understand how your business performs. Working with large data sets, we identify trends and reach meaningful conclusions that inform strategic business decisions. Once knowledge has been discovered in your data, a Machine Learning Engineer leverages it to build Artificial Intelligence tools that automate processes within your company.
A Data Engineer plays a key role in building data-driven solutions by maintaining the data pipelines and databases used for ingesting, transforming, and storing data. Data Engineers support data science by connecting data from one source to another and transforming it from one format to another, building the data infrastructure that makes a data scientist’s work possible.
A DevOps Engineer works with developers to facilitate better coordination among operations, development, and testing by automating integration and deployment processes.
DevOps helps in:
A Software Consultant advises clients on how to architect and develop large applications, and also helps to technically shape the dev team and maintain the software development and infrastructure ecosystem.
Software consulting covers the following areas: