The goal of this project was to develop an AI backend engine for an intelligent decision support system that assesses ischemic stroke risk. The system was developed to enable preventive interventions for patients at high risk of stroke. In collaboration with a health insurer, we collected historical electronic health records, sociodemographic data, and quality-of-life data to train and evaluate machine learning models.
The data was analyzed to find which features in the collected datasets had the highest impact on ischemic stroke risk.
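One simple way to screen features for impact on a binary outcome is to rank them by the strength of their association with the label. The sketch below is illustrative only: the feature names, toy records, and the choice of Pearson correlation as the association measure are assumptions, not the project's actual method or data.

```python
# Hypothetical sketch: ranking candidate risk factors by the absolute
# correlation between each feature and the stroke label.
# All records and feature names are illustrative, not real patient data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# toy records: (age, systolic blood pressure, stroke label)
records = [
    (54, 120, 0), (67, 150, 1), (49, 118, 0),
    (72, 160, 1), (60, 135, 0), (75, 155, 1),
]
features = {"age": [r[0] for r in records],
            "systolic_bp": [r[1] for r in records]}
label = [r[2] for r in records]

# rank features by |correlation| with the outcome, strongest first
ranking = sorted(features,
                 key=lambda f: abs(pearson(features[f], label)),
                 reverse=True)
print(ranking)  # → ['systolic_bp', 'age']
```

In practice, model-based measures such as permutation importance give a more faithful picture than raw correlation, but the ranking idea is the same.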
The goal of this project was to develop an AI-backed intelligent decision support system assisting medical doctors in making decisions about blood transfusions. In collaboration with multiple hospitals, we collected historical data about blood transfusions, patients, and medical tests relevant to transfusion decisions.
The data was analyzed to find which features in the collected datasets had the highest impact on whether a blood transfusion was performed.
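For a categorical feature against a binary decision, the association can be summarized with a 2x2 contingency table and an odds ratio. The sketch below is an assumption-laden illustration: the low-haemoglobin flag and the counts are made up, not taken from the hospital data.

```python
# Hypothetical sketch: association between a low pre-test haemoglobin flag
# and the recorded transfusion decision, via a 2x2 contingency table.
# Counts are illustrative, not real hospital data.
records = [
    # (low_haemoglobin, transfused)
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

a = sum(1 for lo, tx in records if lo and tx)        # low Hb, transfused
b = sum(1 for lo, tx in records if lo and not tx)    # low Hb, not transfused
c = sum(1 for lo, tx in records if not lo and tx)    # normal Hb, transfused
d = sum(1 for lo, tx in records if not lo and not tx)

odds_ratio = (a * d) / (b * c)
print(odds_ratio)  # → 4.0 ; > 1 suggests low Hb is associated with transfusion
```

The same table-based view generalizes to every categorical test result in the dataset, giving a quick first ranking before any model is trained.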
The goal of this project was to develop an AI backend for an FAQ chatbot. A client had collected a significant number of questions and answers, which were used to train machine learning models. Besides the text, each question had assigned tags, which were used to cluster questions into topic-based segments. The overall algorithm was trained as follows:
first, we trained a topic classifier based on the assigned tags; for this we performed a TF-IDF transformation and trained an XGBoost classifier. Second, for each topic we built Doc2Vec embeddings targeting only that specific topic.
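The two-stage routing above can be sketched in miniature. To keep the example self-contained, simple keyword matching stands in for the TF-IDF + XGBoost classifier and token overlap stands in for Doc2Vec similarity; the FAQ entries and topic keywords are invented for illustration.

```python
# Hedged sketch of the two-stage FAQ retrieval: stage 1 routes a question
# to a topic, stage 2 searches only within that topic's questions.
# Stand-ins: keyword matching replaces TF-IDF + XGBoost, token overlap
# replaces Doc2Vec similarity. All data below is illustrative.
faq = {
    "billing": {"How do I pay my invoice?": "Use the billing portal."},
    "account": {"How do I reset my password?": "Click 'Forgot password'."},
}
topic_keywords = {"billing": {"pay", "invoice"},
                  "account": {"password", "reset"}}

def tokens(text):
    return set(text.lower().replace("?", "").split())

def classify_topic(question):
    """Stage 1: pick the topic whose keywords overlap the question most."""
    q = tokens(question)
    return max(topic_keywords, key=lambda t: len(q & topic_keywords[t]))

def answer(question):
    """Stage 2: within the predicted topic, return the closest FAQ answer."""
    topic = classify_topic(question)
    q = tokens(question)
    best = max(faq[topic], key=lambda known: len(q & tokens(known)))
    return faq[topic][best]

print(answer("How can I reset my password?"))  # → Click 'Forgot password'.
```

Restricting the similarity search to one topic is what makes per-topic embeddings attractive: each embedding space only has to separate questions within its own segment.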
The goal of this project was to develop an anomaly detection pipeline for business intelligence systems in order to trigger a BI report distribution - via Slack or e-mail - when an anomalous situation was detected, or to send an alert to predefined groups of managers.
The system was designed to collect data online from various BI reporting systems (e.g., Tableau). The data was then modelled as time series and stored in InfluxDB, after which it was sent to a stream processing engine, Kapacitor.
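A common anomaly rule on such time series is a rolling z-score threshold: flag a point when it deviates from the recent mean by more than k standard deviations. The sketch below is a minimal stand-alone illustration of that idea, not the production Kapacitor rule; the window size, threshold, and data are assumptions.

```python
# Minimal sketch of a rolling z-score anomaly trigger: flag a point when it
# deviates from the mean of the previous `window` points by more than
# k standard deviations. In production such rules ran inside Kapacitor;
# the parameters and data here are illustrative.
from statistics import mean, stdev

def anomalies(series, window=5, k=3.0):
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]          # trailing window, excludes point i
        m, s = mean(hist), stdev(hist)
        if s > 0 and abs(series[i] - m) > k * s:
            flagged.append(i)                # would trigger a Slack/e-mail alert
    return flagged

data = [10, 11, 10, 12, 11, 10, 11, 40, 10, 11]
print(anomalies(data))  # → [7]
```

One design point worth noting: the window is trailing (it excludes the current point), so a spike cannot inflate the very statistics used to judge it.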