The doors open at 08:30. The programme of the Big-Data.AI Summit starts with a two-hour block on the main stage of hub.berlin. In the afternoon, presentations, panel discussions and workshops take place on five parallel stages.
11 Apr 2019
Sessions run from 11:00 to 16:40 in 20- to 30-minute slots across the Red Arena (keynotes and panels), the John McCarthy, Grace Hopper, Hari Seldon and Joan Clarke Stages, and the Workshop Stage (workshops, 13:00 – 14:20). Tracks include AI, Big Data, Ethics and Society, Retail & Logistics, Data Protection and Security, and Digital Business Strategy; talks are held in English (EN) or German (DE) and are marked as Basic or Advanced.
We have entered a new era of analytics, with machine learning and artificial intelligence algorithms beginning to deliver on the long-promised advance towards self-learning systems. These approaches allow us to solve previously intractable problems with entirely new lines of attack. The appetite of deep learning algorithms for vast amounts of data, and their ability to derive intelligence from diverse sets of noisy data, allow us to go far beyond previous capabilities in what we used to call advanced analytics. However, to be successful we need to understand the capabilities and limitations of the new technologies. We also need to develop new skill sets in order to harness the power of deep learning to create business value in an enterprise.
Malicious (human) behavior has become digital: credit card fraud, identity theft and system intrusion, just to name a few, belong to a new class of human-caused malicious acts that occur at a scale and frequency never experienced before. Detecting these acts has become a crucial activity for any digitized business, but humans are no longer able to keep up. The industry has reacted to the increased demand for decision making and is extending its BI products with AI capabilities, often marketed as all-purpose AI solutions. Based on our experiences with projects in the financial industry, we have built a live showcase for you around identifying risky behavior in IT ticket data. The showcase focuses on building a prescriptive model to act against malicious activities in the city of Chicago. You will get to know our heuristics for tool selection and how you can transition from aggregated visuals to your own sophisticated machine learning approach.
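As a rough illustration of the modelling step behind such a showcase, the sketch below scores tickets for unusual behavior with an isolation forest. The feature names and data are invented placeholders and this is not the showcase or model presented in the talk.

```python
# Illustrative sketch only: scoring IT tickets for unusual behaviour with an
# isolation forest. The features are invented placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
tickets = pd.DataFrame({
    "changes_per_ticket": rng.poisson(3, size=1000),
    "off_hours_activity": rng.binomial(1, 0.1, size=1000),
    "privileged_access": rng.binomial(1, 0.05, size=1000),
    "resolution_minutes": rng.gamma(2.0, 60.0, size=1000),
})

model = IsolationForest(contamination=0.02, random_state=0)
# score_samples: higher = more normal, so negate to get a risk score.
tickets["risk_score"] = -model.fit(tickets).score_samples(tickets)

# Highest-scoring tickets are the ones an analyst would review first.
print(tickets.sort_values("risk_score", ascending=False).head())
```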
Accurate forecasts of customer demand are key to successful supply chain management. We apply deep feedforward neural networks to explore demand patterns in the sales time series of more than 1,000 products. The approach incorporates an automated model building, training and evaluation scheme. The forecasts are integrated into the enterprise resource planning system of a Siemens factory. We benchmark the forecast accuracy of our approach against state-of-the-art machine learning methods.
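A minimal sketch of what a feedforward demand forecaster of this kind can look like, using Keras. The window length, layer sizes and toy data are assumptions for illustration, not the Siemens pipeline described in the talk.

```python
# Minimal sketch of a feedforward demand forecaster (assumed setup): the last
# 12 sales values of a product are used to predict the next period's demand.
import numpy as np
from tensorflow import keras

WINDOW = 12  # number of past observations fed into the network (assumption)

def make_windows(series, window=WINDOW):
    """Turn one sales time series into (input window, next value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

# Toy data standing in for a real per-product sales history.
rng = np.random.default_rng(0)
series = rng.poisson(lam=20, size=200).astype("float32")
X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),  # point forecast of next-period demand
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

next_demand = model.predict(series[-WINDOW:].reshape(1, -1), verbose=0)
print(float(next_demand[0, 0]))
```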
AI enables us to use data to find answers to questions we could not have answered before. The most critical question when talking about energy is obviously what is preventing us from living in a carbon-free society. In this talk, we analyze what is holding us back and present how we are using the power of data to optimize our renewables, make our grids more intelligent, efficient and reliable, engage people more with energy, turn electro-mobility into a reality, help our homes, buildings and municipalities to save energy, and much more. We will show that we walk the talk: AI is happening, it is real, and it is transforming today the energy world we will see tomorrow.
Deep neural networks have become a key technology in domains like manufacturing, health care, or finance as they allow for predictions with high accuracy. However, there are many scenarios where highly accurate predictions alone are not enough and trust becomes crucial. Here, critical decisions must be complemented by explanations such that users are able to understand the results or the general behavior of the network. In this talk a practical approach for extracting information on the internal processes from neural networks is presented. For this purpose, simple decision trees are extracted from trained models that allow a user to understand the reasoning of the network. It is shown that simply fitting a decision tree to a learned model usually leads to unsatisfactory results in terms of accuracy and fidelity. Instead, it is demonstrated how to influence the structure of a neural network during training such that fitting a decision tree leads to significantly improved results.
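The following is a minimal sketch of the naive distillation step that the talk starts from: fit a small decision tree to a trained network's predictions and measure fidelity (agreement with the network). The model and data are toy stand-ins, and the training-time technique that improves the results is not shown here.

```python
# Naive distillation baseline: train a tree on the network's outputs and
# compare its fidelity to the network with its accuracy on the true labels.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# The tree is trained on the network's predictions, not on the original labels.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, net.predict(X_train))

fidelity = accuracy_score(net.predict(X_test), tree.predict(X_test))
accuracy = accuracy_score(y_test, tree.predict(X_test))
print(f"fidelity to network: {fidelity:.2f}, accuracy on labels: {accuracy:.2f}")
```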
We present the latest generation of an Inspirational AI, the Artificial Muse of Roman Lipski, which is based on generative adversarial networks (GANs) and allows for an intuitive and fluid interaction between artist and AI. In our talk we will take a deep dive into the technical layer and share the lessons we learned at "the in-betweens" of Roman and his Artificial Muse, of human and artificial intelligence. And this is just the beginning…
Intelligent enterprises connect human and artificial intelligence for all business divisions to automate repetitive tasks and improve the user experience for specific processes. Corporations that target best-in-class use of new technologies expect scalable innovation, agility and consistency as well as added value, deeper insights and easy integration into existing systems. Dr. Sebastian Wieczorek, Head of Leonardo Machine Learning Foundation, will explain how SAP helps corporations to deliver the Intelligent Enterprise with the help of machine learning. Get to know successful use cases for the intelligent processing of image, text, and speech data that can be applied in all lines of business and industries.
The Parts Alliance is one of the UK’s leading suppliers of automotive parts, supplying 30,000 different manufactured car parts to national and local independent garages and workshops. The Alliance consists of over 155 branches across the UK; however, it lacked a unified pricing system across this network: prices for parts were determined at the point of sale by each individual sales advisor. The Parts Alliance felt, therefore, that it was not making the best use of its data. In this talk we will describe a five-week project aimed at making better use of The Parts Alliance’s data in order to create a dynamic pricing engine that maximises sales revenue. The result of this work could translate to an almost 30% revenue increase for the company, or up to £6 million. We will also discuss the practical concerns we encountered when considering how best to implement a large-scale overhaul of a central function in a large company.
Supported by the Horizon 2020 programme on Smart Cities and Communities (SCC), the Triangulum project shall demonstrate how a systems-innovation approach based on the European Commission’s SCC Strategic Implementation Plan (SIP) can drive dynamic smart-city development, to be tested across three lighthouse cities: Manchester, Eindhoven and Stavanger. The data hub provides access to data from the urban infrastructures and allows value-added services to be provisioned on top of it. The data may be available as streams of real-time sensor data, as (static) raw data originating from various data providers, or as enriched data that has been extracted from and improved upon the raw data and/or the real-time sensor data, e.g. with semantic relations or quality information. The Open Data and Service Engine consists of two layers: 1) the Analytics Layer, which deals with real-time sensor data, raw data and the corresponding enriched data, all of which are made available to the required smart-city ICT services, and 2) the Service Layer, which uses filtered and aggregated data and information from the Analytics Layer to provide services that are either used by urban managers, citizens and communities via smart-city apps or that monitor, control and/or manage urban infrastructures, e.g. in order to assess, evaluate and improve their quality.

Data sharing has become a popular daily activity all around the world, and data analysis may yield value in many respects. Data-driven services show potential in many sectors, for example energy, health, banking, insurance and transportation. However, violations of user privacy and digital rights management (DRM) in the form of unintended data use, corporate applications and security breaches are widely reported across multiple sources. The EU General Data Protection Regulation (GDPR) aims at protecting individuals’ privacy; cloud service providers dealing with EU citizens’ data had to comply with the GDPR by 2018. However, accountability frameworks for distributed IT services are needed but still absent, so it is difficult for users to understand, influence and determine how their service providers honour their obligations. It is important to support users in deciding and tracking how cloud service providers use their data. Blockchain and other distributed ledger technologies (DLTs) enable, through recent developments, not only transactions but also smart contracts, allowing complex computation on a network where parties that are geographically distant or have no particular trust in each other can interact and exchange value and information on a fully distributed basis with few or no central intermediaries. Our patented blockchain-based, decentralized and distributed technology proposes a novel on-the-fly dynamic control framework for shared data. The solution uniquely allows a user to trace, retract, remove and limit sharing of shared content. It gives digital rights and sharing control back to the data creator, which today is often considered lost once the data is shared. It aims to balance data utility and privacy, creating a win-win situation between organizations and their customers.
The internet has changed the way we make decisions. However, the way executives make decisions has remained surprisingly unchanged. Every day, competitors are leaving behind online breadcrumbs filled with valuable external data, from hiring a new employee and filing a new patent to launching a new product, online ad spend, and social media activity. Consumers and companies are producing online content at an unprecedented rate, creating a treasure trove of consumer insights and competitive intelligence. Today, thanks to AI and machine learning, we have the ability to monitor Porter's Five Forces in real time. As a result, the role of a business leader looks very different than it did in the past. Drawing on practical examples of transformative, data-led decisions made by leading global brands, Lyseggen will illustrate the future of corporate decision-making and offer a detailed plan for business leaders to implement this thinking into their company mindset and processes.
It is commonly known that big companies such as Amazon or Alibaba use AI in all areas in order to optimise internal processes and to provide an enhanced shopping experience to their customers. But as automation moves forward, AI has become the key factor in establishing an advantage over competitors for any company engaged in E-Commerce. One of the most useful AI methods for the E-Commerce domain is AI-based textual analysis. Use cases include the automatic generation of product information pages, content-based recommendations for the customer, and review analysis. Cornelia Werk presents specific applications of AI in E-Commerce and outlines the way textual analysis tools work and the opportunities they offer to companies.
Photovoltaic (PV) technology has spread all over the world on private household rooftops and contributes to a regenerative energy system. Here, a data-driven approach for predicting the spatio-temporal evolution of PV is presented. Based on time series describing the past evolution of PV as well as data on socioeconomics and households, two neural nets are trained. Their output yields detailed predictions of the future spread of PV. In contrast to standard Monte Carlo techniques, this approach captures spatial correlations due to collective behavior. The resulting clusters are highly relevant in practice, e.g. for adequate planning of the distribution grid.
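A simplified stand-in for this kind of adoption model, not the authors' actual networks: a small neural net predicting, per region, whether new PV systems will be installed, from past adoption counts and socioeconomic features. All features and the target are invented placeholders.

```python
# Toy sketch: predict per-region PV adoption from past installs and
# socioeconomic features. Data and target are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_regions = 2000

past_adoption = rng.poisson(5, size=(n_regions, 3))         # installs, last 3 years
income = rng.normal(35000, 8000, size=(n_regions, 1))        # mean household income
homeownership = rng.uniform(0.2, 0.9, size=(n_regions, 1))   # ownership rate
X = np.hstack([past_adoption, income, homeownership])

# Toy target: regions with more past installs adopt more (plus noise).
y = (past_adoption.sum(axis=1) + rng.normal(0, 2, n_regions) > 15).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```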
Big Data enables companies to develop algorithms that predict customer behavior. These algorithms are based on the data available within individual business units, which leads to a cascading set of challenges.
The solution to these problems – an easy-to-use knowledge graph – will be presented in this talk.
How are AI services used to automate processes? Visual recognition has lately become one of the most important tools for supporting quality-testing processes. How are these services trained, and what challenges do we face in deploying them in employees' daily processes?
The landscape of big data applications is changing rapidly: large centralized datasets are being replaced by high-volume, high-velocity data streams generated by a vast number of geographically distributed, loosely connected devices such as mobile phones, autonomous vehicles or industrial machines. Current parallel learning approaches are not designed for such highly distributed systems. Therefore, a new paradigm for parallelization is emerging that treats the learning algorithm as a black box, training local models and aggregating them into a single strong one. The approach is highly scalable, communication-efficient and privacy-preserving, since it does not require local data to be exchanged. It enables novel applications by minimizing computation and communication costs, both highly relevant for autonomous driving and learning on mobile phones. It also makes it possible to learn from privacy-sensitive data that is otherwise protected by corporate secrecy, such as sensor data from industrial machines.
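A toy illustration of the aggregation idea, assuming the simplest possible scheme (averaging the parameters of locally trained linear models); this is a sketch of the general technique, not the specific parallelization scheme presented in the talk.

```python
# Each "device" trains a local linear model on its own data; only the model
# parameters (never the raw data) are sent to a coordinator that averages them.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

def train_local(X, y):
    """Train a model on one device's local data."""
    clf = SGDClassifier(max_iter=200, random_state=0)
    clf.fit(X, y)
    return clf.coef_.copy(), clf.intercept_.copy()

# Simulate three devices, each holding its own private shard of data.
shards = [make_classification(n_samples=500, n_features=10, random_state=s)
          for s in range(3)]
local_params = [train_local(X, y) for X, y in shards]

# Coordinator: average coefficients and intercepts into one global model.
avg_coef = np.mean([c for c, _ in local_params], axis=0)
avg_intercept = np.mean([b for _, b in local_params], axis=0)

global_model = SGDClassifier()
global_model.coef_, global_model.intercept_ = avg_coef, avg_intercept
global_model.classes_ = np.array([0, 1])  # required before calling predict

X_any, _ = shards[0]
print(global_model.predict(X_any[:5]))
```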
In recent years, Postbank started to optimize and automate many of its business processes through Big Data, from a sped-up credit process to a central system in which all accesses to personal customer data are recorded for the GDPR. This talk deals with the different business cases in which Big Data technologies are involved. It explains the basic data platform architecture and how technologies like Kudu, Kafka and Spark have changed and improved the business. Lastly, best-practice patterns are introduced for designing a data lake and for ingesting and accessing data in the data lake.
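A hypothetical sketch of the kind of ingestion path such an architecture typically uses: a Kafka topic streamed into a data-lake table via Spark Structured Streaming. The topic name, paths and schema are illustrative placeholders, not Postbank's actual setup.

```python
# Generic Kafka -> Spark Structured Streaming -> data lake ingestion sketch.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

# Placeholder event schema.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "customer-events")
       .load())

# Kafka delivers bytes; parse the JSON payload into typed columns.
events = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/datalake/raw/customer_events")
         .option("checkpointLocation", "/datalake/_checkpoints/customer_events")
         .start())
query.awaitTermination()
```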
Machine learning is revolutionizing science, the economy, and society. In science, pattern recognition by learning algorithms (e.g., in big data from the life sciences and medicine) supports and sometimes replaces the cognitive abilities of human scientists. Predictive analytics opens new avenues to predict human behavior for business strategies and predictive policing. But machine learning is based on neural nets with exploding numbers of parameters, which are often merely trained and tuned on big data until they produce the desired results. In this case, neural nets are black boxes: statistical procedures that lack causal explainability. Yet without causal explainability of machine learning, clarification of responsibility is impossible. Explainable AI, however, is not only still at its beginning; the question of responsibility transcends explainability, as it leads ultimately to commercial warranty and personal liability. A crucial example is the development of self-learning cars. Trust in provable and controllable software might contribute substantially to society's acceptance of AI, despite the inherent risks. Apparently there are still few regulations in software engineering that could help to assign responsibility. The challenges of responsibility require more research on the foundations of machine learning. Here we aim to get a sense of the standards that need to be set for AI software.
We present AI-enabled technology to facilitate learning in corporate environments. The most common learning situation—a workshop—is designed to enable employees to deepen their knowledge of a topic. We developed AI-enabled technology to create intelligent ways to automate learning about AI. Miniature computers use facial recognition, location trackers and environment sensors to collect data from participants and objects in the room. We apply machine learning to interact with participants and improve their learning experience. To boost acceptance, we decided to combine all the presented technology in the form of an exit game. Using this concept, we combine the participants’ motivation to play with the goal of introducing intelligent automation.
We contend that a fully artificial intelligence is neither achievable nor desirable. Instead we propose "Human-in-the-Loop Machine Learning", which places the business team at the core as both trainer and quality assurer of the ML features of the product. We argue that we should concentrate more on the operationalization of ML features and their integration into the service or the product. We present our ML development tools and process and demonstrate two use cases. We show that financial transactions can be used to predict life-changing events and can serve as accurate next-best-offer tools for insurance products.
The Bitkom guidelines address policy makers and data protection authorities, consumers and developers, AI users, providers and corporate ethics officers. They contribute to the discourse on legal policy and provide guidance for an ethical, responsible use of automated decisions and AI. The guidelines also address transparency issues and regulatory approaches, thus creating confidence in the technology. The focus is on specific sectors such as health, human resources, defense and banking, with the aim of helping businesses implement and develop AI technology and processes in a responsible way.
Big Data platforms enable you to collect, store and manage more data than ever before. But what's the point of it if it's not useful to your organisation?
People and companies that visualise their data are more than twice as likely to interact with and explore it. Tableau helps your people spot patterns in your data as they explore and ask questions of it in a visual way.
This session will show you how to bring your big data down to eye level and discover critical insights by making use of your most valuable assets: your data and your people.
OpenText™ AI Augmented Capture is an innovative end-to-end solution that enables contextual understanding of data, ensures documents are indexed, classified and routed appropriately, and provides machine-initiated workflows. It automates tasks that usually require human understanding, decision-making and action, such as analyzing documents and archiving them or routing them to the right person or process. In this session you’ll learn best practices for the cognitive capture of electronic and hard-copy documents. You’ll see an example of how simple it can be by snapping a picture directly from your smartphone. And you’ll learn how text analytics for content recognition allows for instant and intelligent document classification and routing.
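As a generic illustration of the text-analytics step behind classification and routing (one common way to implement it, not OpenText's product internals): a TF-IDF representation plus a linear classifier assigning each captured document to a target queue. The documents and queue names are invented.

```python
# Generic document-routing sketch: TF-IDF features + linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "invoice number 4711 total amount due 30 days",
    "please find attached my complaint about the delivery",
    "application for the open position, CV attached",
]
train_routes = ["accounting", "customer_service", "hr"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(train_docs, train_routes)

new_doc = "reminder: invoice 4712 is still unpaid"
print(router.predict([new_doc])[0])  # predicted target queue
```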
Artificial intelligence increasingly complements or even replaces human decisions. This trend can be seen across all industries and brings both opportunities and risks for companies and citizens. Comprehensive governance of AI prerequisites, model development and deployment can mitigate the risks and increase the chances of generating positive business value. Ethical guidelines, e.g. avoidance of bias regarding customer gender, age or religion, should be established, implemented and monitored, and detected problems should be corrected to avoid bias with negative impact. Examples of violations of or conformance with common ethical standards are increasingly discussed in the press and in public. Autonomous car accident decisions, life insurance approval based on selfies and AI disease recognition will be discussed.
Motivated by assessing the performance of mobile network services, P3 communications developed a Software Development Kit which gathers a vast amount of data from millions of end-user devices worldwide. Apart from its original purpose, P3 has successfully used these data to answer exciting questions within the scope of diverse industry projects. This presentation will give an overview of selected projects and point out some of the challenges that arise when processing and analyzing the data in the context of such projects, including cleaning the data, dealing with missing features, and deriving statistically valid insights from the data. Furthermore, potential use cases for the data as well as new ideas to enhance the accuracy and applicability of the data using spatial statistics and machine learning methods will be discussed.
This session aims to shed light on the setup, work and lessons learned of the Data Science Nucleus of the European Central Bank. It is a flexibly built analytics centre of excellence that combines business acumen with deep data science and information technology knowledge. The goal of the ECB Data Science Nucleus is to help the bank reap the full benefit of AI technologies and thus improve business processes. In this session, key examples of delivered work are discussed together with the operational aspects.
Airline Data Intelligence – how to unlock the power of data and new technologies for passengers and employees alike. In this session we will give an insight into how Lufthansa Airlines is utilizing its data assets and how new technologies such as machine learning, big data and analytics are leveraged to increase the passenger experience and, at the same time, empower our employees to make the right decisions at any time. AI, especially machine learning, promises to reveal the potential hidden in the data, both structured and unstructured, that we already have. First, however, it is important to break up data silos and prepare the data for AI processing. Our solution is terabyte-class big data systems, from which we build business data models accessible to AI. We will share our insights on the first AI-based use cases we have in production as well as those we are currently working on.
Battery production is a critical competence for e-mobility, and at the same time its quality is difficult to predict. This creates a large amount of scrap that is recognized very late, in operational use, and therefore at high cost. The presentation shows how a combination of Industry 4.0, machine learning, machine connectivity and the digital twin helps to solve these issues with powerful insights and prediction capabilities. The resulting setup is a blueprint for industrial use in diverse and dynamically changing real-world environments.
The usefulness of AI leaves it open to use for malicious or self-serving purposes. The stakes are getting higher, and companies, especially those with access to vast pools of data and those that develop and deploy AI products and services, should demand a more responsible work ethic from engineers, entrepreneurs, and executives. They must also establish more assertive boards and committees to make ethical AI their top strategic priority. For a technology designed to mimic human-like intelligence, understanding the darker side of AI is equally important, and probably requires defining the dos and don'ts at a policy level. We must take a democratic approach to the future of technology-led disruptions. The session will address real-life use cases of understanding the parameters of bias, having a well-defined method for the realisation of AI interventions, and algorithmically detecting bias to make systems self-correcting.
Data anonymisation has always been seen as a necessary evil rather than a helpful tool. Complex implementations, drops in data quality, ambiguity in legislation, missing certifications and many different (sometimes very theoretical) models make this topic difficult to access. That is why plenty of myths have arisen around the technology over the years. This talk clears up the most important misunderstandings. It will also explore how data anonymisation contributes to the protection of privacy in times of advanced machine learning and artificial intelligence.
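One of the anonymisation models alluded to above is k-anonymity; the tiny sketch below checks whether a table is k-anonymous with respect to a chosen set of quasi-identifiers. The column names, data and threshold are illustrative only.

```python
# Check k-anonymity: every combination of quasi-identifier values must occur
# at least k times. Columns and records are invented placeholders.
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

records = pd.DataFrame({
    "age_band": ["30-39", "30-39", "40-49", "40-49", "40-49"],
    "zip_prefix": ["101", "101", "104", "104", "104"],
    "diagnosis": ["A", "B", "A", "C", "B"],  # sensitive attribute, not a QI
})

print(is_k_anonymous(records, ["age_band", "zip_prefix"], k=2))  # True
print(is_k_anonymous(records, ["age_band", "zip_prefix"], k=3))  # False
```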
A maintenance interval every 20,000 km, an oil refill every 10,000 km – is that really necessary? Most modern vehicles are already connected to the manufacturer's home base, but the collected data is mostly used for driver services in entertainment and navigation. AVL List and AVL Ditest focus on applications in which vehicle data is utilized for failure prediction, predictive diagnostics and the resulting feedback into engineering. We will present our solutions for infrastructure as well as prediction approaches and further use cases.
Innovation is driven by everyone: student and professor; inventor, researcher, developer; start-up, university, or corporation. All of these players need digestible, actionable, credible business insights in order to make informed, smart decisions. However, sifting through vast amounts of data and quickly identifying the important bits and pieces of information is not tractable without the help of machine intelligence. We will present lessons learned from building a lean data warehouse that transforms information extracted from various sources into actionable innovation and technology intelligence:
- Competitive landscape/business ecosystem: who are my peers, and how can I rate their impact on innovation?
- Technology landscape: how is technological innovation pushed by research and industry, and pulled by public opinion?
- Hot topics and trends: which are emerging, potentially disruptive technologies?
Dagmar Schuller, CEO and co-founder of audEERING, will demonstrate how artificial intelligence and machine learning are able to revolutionize the healthcare sector by advancing it into the digital era. With its sensAI technology, audEERING uses methods of machine intelligence and deep learning to recognize 50 speaker conditions and emotions, such as anger or fear, based on the degree of arousal and valence as well as other essential characteristics of the human voice. Speech loss, mood swings and stress can be analyzed accurately and without any medical intervention. Even the shortest speech recordings of just a few seconds can provide fundamental insights for the diagnosis of neurocognitive diseases such as Alzheimer's, Parkinson's or depression in real time. These can thus be detected in the early stages and during the symptom-free period.
Voith’s Digital Ventures division plays a vital role in driving the digital transformation and Industry 4.0. Bringing together all of Voith’s knowledge and expertise in digitization and automation, the focus is on developing new digital business solutions for customers. During the case-study presentation you will learn how Voith creates new customer services for the optimization of machinery across the entire life cycle by moving from digital to intelligent services. The new intelligent services solution provides a real digital twin of complex industrial machines with all relevant data at the right time. The semantic network i-views enables Voith to create the digital twin by integrating all relevant information and data across various systems to build a harmonized semantic data model. Combined with augmented reality, it is possible to optimize maintenance efforts and processes significantly.
Anonymization is key to a secure and accepted use of big data, especially when it comes to the analysis of mobile network data. The “Data Anonymization Platform” (DAP) of Telefónica NEXT is an effective tool for the anonymization of large volumes of data. Throughout its many years of development, the platform was supported by the German Federal Commissioner for Data Protection and Freedom of Information and was awarded the seal “Certified Data Privacy” by TÜV Saarland. Based on these anonymized, aggregated and extrapolated data, Telefónica NEXT develops analytical solutions for all kinds of sectors that need a constant and reliable view of people’s movement behavior at large scale, e.g. retail, tourism and transport. By processing the mobile network data of Telefónica Germany, Telefónica NEXT is able to deliver sound origin-destination matrices of people in combination with their mode of transport and other valuable criteria.
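A simplified sketch of how anonymised, aggregated trip records can be turned into an origin-destination matrix with pandas; the zone names, columns and counts are placeholders, not the DAP's actual output format.

```python
# Build an origin-destination matrix from aggregated trip records.
import pandas as pd

trips = pd.DataFrame({
    "origin_zone":      ["Mitte", "Mitte", "Kreuzberg", "Kreuzberg", "Pankow"],
    "destination_zone": ["Pankow", "Kreuzberg", "Mitte", "Mitte", "Mitte"],
    "mode":             ["rail", "car", "rail", "bike", "car"],
    "trip_count":       [120, 80, 95, 40, 60],  # extrapolated counts, not individuals
})

od_matrix = trips.pivot_table(index="origin_zone",
                              columns="destination_zone",
                              values="trip_count",
                              aggfunc="sum",
                              fill_value=0)
print(od_matrix)
```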
Everybody loves a great transformation story. But often forgotten once the transformation is complete are the challenges that are met along the way. The most common of these challenges is around the adoption of new technology – you want to do it, but you are scared of failing.
Jordan Barker from Alteryx will discuss the most common reasons enterprises fail in their journey to analytics transformation and will share how you can navigate pitfalls through dynamic and scalable solutions.
Join this session to learn how you can fail fast and come back as an analytics superstar.
In this talk, we report on the typical course of AI projects in manufacturing companies. In almost all cases, the professional relationship started with a first request for a predictive maintenance solution, and in all cases we started to interlace AI methods with the culture and DNA of the company. Although the buzzword "predictive maintenance" is on everyone's lips, there are still no real best practices on the market, and we want to summarize the challenges encountered when implementing these use cases and how we addressed them. Furthermore, we want to highlight subsequent use cases that are typically of interest to manufacturing companies.
As artificial intelligence matures, an increasing number of systems are making the transition from research to products for clinical use. Currently there are no standards in place that regulators can use for the quality assessment of clinical AI systems. Most tools are approved on a case-by-case basis and are based on retrospective data. We need to implement standards in the evaluation of AI systems that ensure the quality of AI diagnoses and treatment recommendations. Possible elements of a regulatory quality framework to ensure patient safety are discussed.
Beyond the "disruptive" ambitions of fintechs, blockchain or cryptocurrencies, there is a much more game-changing revolution in the backbone of digitization, related to Big Data, machine learning and AI. This revolution brings a very quiet and almost unnoticed upheaval of processes in business by converting ledger structures, information flows and documentation processes. And this Big Data revolution has made the recent AI advances possible. We demonstrate these transformations in two cases: the advanced use of "Big Data Analytics" and the establishment of standardized "data lakes". We will discuss the ethical area bridging the gap between law, general principles and innovative abandon. And we will show methods, frameworks and communication to create a new ethical awareness among data science teams.
The role of Automation and AI in redefining the future of work is now strongly established. Driving the Enterprise Automation Journey in a collaborative ecosystem and measuring success through proven business outcomes are pivotal to the re-imagination of business and technology processes. In this session, we will discuss what it takes to lead this transformation. As experience speaks for itself, we will demonstrate some real-world use cases that substantiate the benefits that clients have derived through the Wipro HOLMES Artificial Intelligence Platform & its Ecosystem.
Volkswagen AG is one of the leading car manufacturers in the world, with thousands of employees, processes and IT systems to keep the company running. Although we are not a software company today, we realized that digitalization, and especially artificial intelligence (AI), will have a large impact not only on our products and services but also on our ways of working. That is why we decided to establish an AI strategy for our internal processes, so that we use the latest AI-related technologies to improve the company and motivate our employees to take on the related challenges and opportunities. In this presentation, we want to outline how we, as an industrial enterprise, defined our AI strategy and what the concrete goals are. We will also show how we are implementing this strategy and which obstacles we have faced so far. By providing this transparency, we seek to establish a small community among traditional enterprises like ours to exchange best practices and drive the AI transformation forward.
Manufacturers need to deliver more and more products and product variations in ever shorter cycles to remain competitive, which requires more efficient and cost-effective product design and assembly planning processes. This presentation shows how to automatically predict assembly times and assembly plans for new 3D product designs using machine learning, thereby accelerating the product design and assembly process, lowering costs, and increasing profit margins. Standard machine learning methods only process flat input vectors or data tables and deliver only atomic values, i.e. categories in the case of classification or numbers in the case of regression. Here, in contrast, we are faced with complex hierarchical 3D product designs with hundreds or thousands of parts, enriched with textual data, as input, and have to deliver a sequence of assembly steps as output. The presentation shows how to solve this complex task efficiently and accurately with machine learning. Validations at Daimler Trucks and Miele showed accurate predictions.
FASAS is the "Fraud And Security Analytics System" of Telekom Security, launched in 2017 and built on the Cloudera Hadoop ecosystem. The first operational use case was the detection of international voice fraud. This talk is about three new use cases in the field of cyber security on massive data sets:
1. Botnet command & control centre detection with DNS cache misses
2. Blackhole monitoring for cyber threat intelligence
3. Detection and analysis of illegitimate login activities at Telekom Login
All use cases are implemented using state-of-the-art Big Data and AI methods, providing new insights such as clusterings and statistics. These can be used by cyber security and fraud detection experts at their convenience to explore anomalies in more depth. In this way, AI methods assist the experts in their daily search for the "needle in the haystack", automating tedious standard tasks and pointing out new anomalies worth investigating.
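As a toy illustration of the clustering idea behind such use cases, the sketch below clusters per-domain features derived from resolver logs (e.g. cache-miss rate) so that analysts can inspect unusual groups first. The features and data are invented placeholders, not the FASAS pipeline.

```python
# Cluster per-domain DNS features and surface outliers for analyst review.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
domains = pd.DataFrame({
    "cache_miss_rate": rng.beta(2, 8, size=500),       # fraction of cache misses
    "distinct_clients": rng.poisson(40, size=500),      # clients querying the domain
    "queries_per_hour": rng.gamma(3.0, 50.0, size=500),
})

X = StandardScaler().fit_transform(domains)
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)

domains["cluster"] = labels
# Points labelled -1 did not fit any dense cluster; these are the candidates
# an analyst would look at first.
print(domains[domains["cluster"] == -1].head())
```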