Vahe Andonians' main interest is the application of Artificial Intelligence to processing financial information, particularly in the form of PDF documents. Reported financial information is often contained in tables and charts, which therefore require graphical classification before the information can be extracted and processed.
As part of this effort, he is also working on word embeddings tailored to the financial industry.
Vahe is a serial entrepreneur who has successfully founded and sold businesses in the past. In 2017 he sold his latest startup, SCDM, an AI-based analytics company specializing in the fixed-income market, to Deloitte. Prior to that, Vahe had sold a structured credit business to Moody's Analytics.
Sebastian Ebert’s research focuses on the psychology behind financial and other risky decisions, thereby contributing to the growing fields of behavioral economics, finance, and insurance. For example, he investigates the impact of human biases on asset prices or on individual investors’ investment success. Recent work analyzes the importance of non-standard risk and time preferences for decision-making. Methodologically, Ebert relies on analytical modeling as well as on experiments. Much of his work is interdisciplinary, merging insights from economics, statistics, mathematics, and psychology.
Professor Florian Ellsäßer’s main interest lies in inference under uncertainty. On the theoretical side, his focus has been on research methodology, analyzing the applicability to management research of philosophical theories such as contrastive explanation and of machine learning methods such as Causal Graph Models. On an empirical level, Florian is working on modelling and experimentally testing human information search under different levels of complexity and uncertainty. Furthermore, he uses empirical data to make behavioral predictions, such as how a debate will influence public opinion before it is finished, or how mouse-flow data from websites can predict behavioral characteristics such as whether a person is searching for a particular product or looking for inspiration. Methodologically, Florian mainly relies on machine learning techniques such as LSTM neural networks and networks that combine sinusoidal transformations with traditional deep nets. He also applies natural language processing techniques, particularly in the context of argument extraction.
Political polarization abounds, affecting the workings of our societies. What are the ingredients of opinion polarization? Are there cases where polarized communities can nevertheless reach a consensus? Professor Rainer Hegselmann’s internationally recognized work on opinion dynamics addresses these fundamental questions through the lens of computational social science. Professor Hegselmann’s work explores possible causes of group polarization, which naturally leads to questions about how to prevent groups from polarizing, and about how societies and social networks may defend against actors who wish to encourage group polarization within a network or society at large.
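Hegselmann's best-known contribution to opinion dynamics is the Hegselmann-Krause bounded-confidence model, in which agents repeatedly adopt the average opinion of all peers whose views lie within a confidence bound. A minimal sketch (population size, bound, and step counts are illustrative choices, not taken from any specific paper):

```python
import numpy as np

def hk_step(opinions, eps):
    """One synchronous update of the Hegselmann-Krause model: each agent
    moves to the mean opinion of all agents (including itself) whose
    opinion lies within distance eps of its own."""
    diffs = np.abs(opinions[:, None] - opinions[None, :])
    neighbors = diffs <= eps            # boolean adjacency by closeness
    return (neighbors @ opinions) / neighbors.sum(axis=1)

def simulate(n=50, eps=0.25, steps=200, seed=0):
    """Iterate the model from uniformly random opinions in [0, 1]
    until a fixed point is (approximately) reached."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)
    for _ in range(steps):
        x_new = hk_step(x, eps)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x
```

With a large confidence bound the whole population typically converges to a single consensus; with a small bound it splits into several persistent opinion clusters, which is one simple mechanism for polarization.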
Ronald researches decision-making under uncertainty, particularly with regard to resource allocation problems in innovation and strategy. He is interested in firms’ calculus for deliberate decisions to learn, be it about the viability of a new product development project or the attractiveness of a new market. In experiments as well as field work, Ronald examines firms’ preferences for commitment and flexibility as they navigate different degrees of commercial uncertainty. Building on a background in strategy consulting, Ronald works predominantly with technology firms, especially device manufacturers and service providers in the telecoms space.
Jan Nagler's research includes the understanding and control of networked stochastic systems, with applications at the interface between physics, biology, and socio-economic systems; in particular, game theory, phase transitions, ergodicity breaking, and risk and survival in uncertain environments.
He is the Head of the Deep Dynamics Group, which aims for a deep understanding of real-world system dynamics in finance, economics, ecology, physics and socio-technological systems, in particular networks and networked socio-economic systems.
The group tackles problems by combining cross-disciplinary explanatory modelling (based on deep fundamental underlying mechanisms and principles) with predictive modelling (prediction using deep learning), while being deeply committed to an ethically aligned design of systems, in particular explainable and socially responsible AI.
The group's research portfolio ranges from moral algorithms for autonomous cars and fluctuation-induced transitions in social systems, through the design of digital systems and the use of Big Data following homeokinetic principles, to the design of fluctuations to resolve conflicts.
In the field of passive portfolio management, index-tracking techniques are widely used, but many of the algorithms and processes used to build tracking portfolios lack stability. To overcome this problem, regularization techniques are applied; these, however, are driven primarily by mathematical considerations aimed at achieving desired properties of the tracking portfolios. The team around Professor Peter Roßbach has developed a machine-learning-based method that incorporates stability by introducing prior information into the tracking process. They constructed a general constraint type that can be adapted to different types of information, including information of an economic nature.
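As an illustration of how regularization can inject prior information into a tracking portfolio, and not as a description of Professor Roßbach's actual method, the following sketch shrinks the least-squares tracking weights toward a hypothetical prior weight vector `w0` (e.g. benchmark weights):

```python
import numpy as np

def tracking_weights(R, r_index, w0, lam):
    """Regularized least-squares index tracking:
        minimize ||R w - r_index||^2 + lam * ||w - w0||^2
    R:       T x N matrix of asset returns
    r_index: length-T vector of index returns
    w0:      prior weights encoding the preliminary information
    lam:     regularization strength (trades accuracy for stability)
    Setting the gradient to zero gives the closed-form solution below."""
    T, N = R.shape
    A = R.T @ R + lam * np.eye(N)       # regularized normal equations
    b = R.T @ r_index + lam * w0
    return np.linalg.solve(A, b)
```

For `lam` near zero this reduces to an ordinary least-squares tracker; for large `lam` the weights collapse onto `w0`, so the parameter controls the trade-off between tracking accuracy and stability.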
Francesco Sangiorgi’s research focuses on the economics of information as applied to financial markets. A common theme in his research is the process by which the beliefs of market participants are formed: what information agents choose to observe, how it is learned, and how it impacts market outcomes. Some examples of his work include the study of asset and information markets under Knightian uncertainty (ambiguity) and overconfidence, optimal information sale and disclosure in financial markets, and information search frictions.
Theoretical finance commands a rich set of mathematical tools for modeling dynamic market systems; in practice, however, we are still stuck in the stone age, routinely employing linear models that date back 30 to 50 years to identify various sources of risk, exposures to those risks, and the respective risk premiums in both corporate finance and capital markets. Non-linear models are disliked because of their tendency to overfit the data and produce spurious projections. We are interested in using deep machine learning methods to model specific economic environments and to offer an alternative framework for analysis and decision-making in investment and risk management. It is important not just to build a model and make a prediction, but also to find appropriate statistical criteria that quantify the significance of the prediction and of the model fit.
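A toy version of the last point, that a nonlinear model only counts as an improvement if its gain is statistically significant, can be sketched with a simple i.i.d. Diebold-Mariano-style test on out-of-sample forecast errors. Here a cubic-polynomial fit stands in for the nonlinear model, and the data-generating process and sample sizes are invented purely for illustration:

```python
import numpy as np

def fit_ls(X, y):
    """Ordinary least-squares coefficient estimate."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def dm_statistic(e1, e2):
    """Simple i.i.d. Diebold-Mariano-style statistic on squared-error
    differentials; approximately N(0,1) under the null of equal
    predictive accuracy. Large positive values favor model 2."""
    d = e1 ** 2 - e2 ** 2
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Invented nonlinear data-generating process: y = sin(3x) + noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=400)
x_tr, x_te, y_tr, y_te = x[:300], x[300:], y[:300], y[300:]

linear = lambda z: np.column_stack([np.ones_like(z), z])
cubic = lambda z: np.column_stack([np.ones_like(z), z, z ** 2, z ** 3])

e_lin = y_te - linear(x_te) @ fit_ls(linear(x_tr), y_tr)
e_cub = y_te - cubic(x_te) @ fit_ls(cubic(x_tr), y_tr)
stat = dm_statistic(e_lin, e_cub)   # large positive: cubic model wins
```

Values of the statistic well above 2 indicate that the nonlinear model's out-of-sample advantage is unlikely to be noise, which is exactly the kind of significance criterion the paragraph above calls for.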
Jens Witkowski's research focuses on the intersection of data science and economics, with an emphasis on eliciting, aggregating, and evaluating crowd-sourced information. An example setting is forecasting, where humans or machines provide probabilistic estimates of the possible outcomes of a future event (such as the winner of a political election). In this context, Jens developed algorithms that identify the most accurate forecasters while ensuring that all forecasters have proper incentives for information acquisition and truthful reporting. Drawing mostly on statistical learning and game theory, his research objective is to develop principled, mathematical methods with real-world impact. For example, development economists from MIT and Harvard Business School recently applied his Robust Bayesian Truth Serum in the field in India, using it to elicit community information on entrepreneurial ability.
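The Brier score is the standard strictly proper scoring rule in this literature. It is not Witkowski's Robust Bayesian Truth Serum (which, notably, also works when outcomes are unobservable), but it illustrates the two ideas mentioned above: ranking forecasters by accuracy and giving them an incentive to report truthfully. The forecaster names and numbers below are invented:

```python
def brier(p, outcome):
    """Brier score of a probabilistic forecast p for a binary outcome
    (0 or 1): the squared error (p - outcome)^2. Lower is better."""
    return (p - outcome) ** 2

def mean_brier(probs, outcomes):
    """Average Brier score over a series of events; a simple way to
    rank forecasters by empirical accuracy."""
    return sum(brier(p, o) for p, o in zip(probs, outcomes)) / len(outcomes)

def expected_brier(report, belief):
    """Expected score of reporting `report` when the forecaster's true
    belief is `belief`. Strict properness means this is minimized only
    at report == belief, so truthful reporting is incentivized."""
    return belief * brier(report, 1) + (1 - belief) * brier(report, 0)

# Invented example: an informative forecaster beats one who always hedges.
outcomes = [1, 0, 1, 1, 0]
alice = [0.9, 0.2, 0.8, 0.7, 0.1]   # informative forecasts
bob = [0.5, 0.5, 0.5, 0.5, 0.5]     # uninformative 50/50 forecasts
```

Minimizing `expected_brier` over possible reports recovers the forecaster's true belief, which is the incentive-compatibility property that elicitation mechanisms like Witkowski's build on.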