Włodzisław Duch "Towards clinically useful biomarkers of mental disorders based on brain activity patterns"
Nicolaus Copernicus University, Toruń, Poland
Link to CV: http://www.is.umk.pl/~duch/cv/cv.html
Abstract
The costs of mental disorders are a great burden on health systems.
Despite the concerted efforts of the machine learning community and the application of advanced experimental techniques such as functional magnetic resonance imaging (fMRI), near-infrared spectroscopy (NIRS), and high-density EEG/MEG, we have yet to identify objective biomarkers that can be utilized in clinical settings. Ideally, we should be able to employ cost-effective methods, like basic EEG systems in a resting state, for the early and precise diagnosis of disorders such as autism, PTSD, and schizophrenia.
This presentation will summarize the state of the art in this field and discuss the challenges faced, issues with generalization, and the shortcomings of deep learning in providing effective features for the characterization of brain neurodynamics. A promising alternative proposed by our group combines EEG microstates with recurrence quantification and topological data analysis, offering a practical and potentially game-changing solution.
Duch W., Tołpa K., Ratajczak E., Hajnowski M., Furman Ł., Alexandre L. (2023) Asymptotic spatiotemporal averaging of the power of EEG signals for schizophrenia diagnostics. Communications in Computer and Information Science (Springer Nature Singapore), vol. 1963, pp. 428–439, 2024.
Furman Ł., Tołpa K., Minati L., Duch W. (2022) Short-Time Fourier Transform and Embedding Method for Recurrence Quantification Analysis of EEG Time Series. European Physical Journal Special Topics, pp. 1–15.
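To make the recurrence-quantification ingredient of the approach above concrete, the following minimal Python sketch delay-embeds a single synthetic "EEG-like" channel, builds a recurrence matrix, and computes two standard RQA measures (recurrence rate and determinism). The embedding dimension, delay, and threshold are illustrative assumptions and do not reproduce the settings or pipeline of the cited papers.

import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Time-delay embedding of a 1-D signal into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_matrix(emb, eps):
    """Binary recurrence matrix: 1 where two embedded states are closer than eps."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

def rqa_measures(R, lmin=2):
    """Recurrence rate, and determinism approximated as the fraction of recurrent
    points lying on diagonal lines of length >= lmin (main diagonal ignored)."""
    n = R.shape[0]
    rr = R.sum() / n**2
    diag_points = 0
    for k in range(1, n):                      # upper off-diagonals only
        run = 0
        for v in np.append(np.diagonal(R, offset=k), 0):
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    det = 2 * diag_points / R.sum()            # symmetry accounts for lower half
    return rr, det

# Toy signal: a noisy 10 Hz oscillation, 2 s at 250 Hz (purely synthetic).
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 250)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

emb = delay_embed(x, dim=3, tau=5)
eps = 0.5 * np.std(x)                          # illustrative recurrence threshold
R = recurrence_matrix(emb, eps)
rr, det = rqa_measures(R)
print(f"recurrence rate = {rr:.3f}, determinism = {det:.3f}")

In practice these RQA features would be computed per channel and segment and then combined with microstate and topological descriptors before classification.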
Erol Gelenbe "Random Neural Networks (RNN) for Accurate CyberAttack Detection and Mitigation at the Edge"
Institute of Theoretical & Applied Informatics, Polish Academy of Sciences
&
University Côte d'Azur I3S CNRS, 06100 Nice, France
Erol Gelenbe, FIEEE, FACM, FIFIP, FRSS, received his MS and PhD degrees at the NYU Tandon School of Engineering, which awarded him its Distinguished Alumnus Award in 2010. He has held named personal chairs at NJIT (USA), Duke University (USA), the University of Central Florida (USA), and Imperial College (UK), and full professorships at the University of Liège, University Paris Saclay, and University Paris Descartes. He served as Department Head at University Paris Descartes (1986-92) and at Duke University (1993-1998), and as Director of the School of EECS at UCF (1998-2003). His research focuses on QoS, security and energy, and has been funded by industry, the DoD and NSF in the USA, EPSRC and MoD in the UK, and numerous EU FP5, FP6, FP7 and Horizon 2020 projects since 2003; IEEE Spectrum credits him with some of the inventions regarding fast many-to-many video connectivity that have led to systems such as Zoom (https://spectrum.ieee.org/erol-gelenbe-profile). Professor at the Institute of Theoretical & Applied Informatics, Polish Academy of Sciences, since 2017, he also cooperates with the CNRS I3S Laboratory of University Côte d'Azur (Nice, France) and Yasar University (Izmir, Turkey), and his work is currently supported by EU Horizon 2020 and UKRI grants. He is ranked among the top 25 PhD advisors in the mathematical sciences by the American Mathematical Society's Mathematics Genealogy Project, and he has graduated 24 women PhDs. Winner of the Grand Prix France Telecom 1996 (French Academy of Sciences), the ACM SIGMETRICS 2008 Life-Time Award, and the Mustafa Prize 2017, he has been awarded the high honours of Commander of the Order of the Crown, Belgium (2022), Commander of the Order of Merit, France (2019), Knight of the Legion of Honour, France (2014), Commander of the Order of Merit, Italy (2005), and Grand Officer of the Order of the Star, Italy (2007). A Fellow of the French National Academy of Engineering and of the science academies of Belgium, Poland and Turkey, he is also an Honorary Fellow of the Hungarian and Islamic Academies of Sciences. He currently chairs the Informatics Section of Academia Europaea.
Abstract
Even simple cyberattacks can substantially impair the operation and performance of network systems for many hours, and sometimes days, and also increase the system's energy consumption. Their impact on data security, and the effects of the malware that they convey and install, are also well known. Thus there is a widespread need for accurate cyberattack detection, and for rapid reaction and mitigation when attacks occur. At the same time, detection must avoid false alarms, so as not to impair the smooth operation of a system that is not under attack. Considerable research has therefore been conducted in this important field. Our presentation will briefly introduce the subject and then focus on some recent results from the last 7-8 years that are based on the Random Neural Network (RNN). The mathematical model will be described, and its extensions and deep learning algorithms will be discussed in the context of cyberattack detection and mitigation. The presentation will then focus on practical applications illustrated with different cyberattack datasets, showing the high accuracy and low false alarm rates that can be achieved. Measurements of active control schemes for attack mitigation will also be shown. Finally, we will show how the RNN can be used with Reinforcement Learning and SDN (Software Defined Networks) to dynamically control an Edge System that optimises Security, QoS and Energy Consumption. Note: The talk is based on our publications in the following journals and conferences: Proceedings of the IEEE (2020), Sensors (2021, 2023), ACM SIGCOMM Flexnets (2021), ICC (2016, 2022), IEEE Access (2022, 2023), Performance Evaluation (2022), IEEE Trustcom (2023).
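For readers unfamiliar with the RNN model mentioned above, the sketch below iterates its standard fixed-point equations for the steady-state excitation probabilities, q_i = lambda+_i / (r_i + lambda-_i), where the excitatory and inhibitory arrival rates to each neuron depend on the other neurons' q_j through the excitatory and inhibitory weights. The network size, rates and weights are arbitrary illustrative values, not parameters of the attack-detection systems discussed in the talk.

import numpy as np

rng = np.random.default_rng(0)
n = 5                                      # illustrative network size

# Excitatory weights w+(j,i), inhibitory weights w-(j,i), and external
# excitatory (Lambda) / inhibitory (lam) Poisson arrival rates.
W_plus = rng.uniform(0.0, 1.0, (n, n)); np.fill_diagonal(W_plus, 0.0)
W_minus = rng.uniform(0.0, 1.0, (n, n)); np.fill_diagonal(W_minus, 0.0)
Lambda = rng.uniform(0.5, 1.5, n)
lam = rng.uniform(0.0, 0.5, n)

# Total firing rate of each neuron: r_i = sum_j (w+(i,j) + w-(i,j)).
r = W_plus.sum(axis=1) + W_minus.sum(axis=1)

# Fixed-point iteration for q_i = lambda+_i / (r_i + lambda-_i).
q = np.zeros(n)
for _ in range(500):
    lam_plus = Lambda + W_plus.T @ q       # excitatory arrivals to each neuron
    lam_minus = lam + W_minus.T @ q        # inhibitory arrivals to each neuron
    q_new = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
    if np.max(np.abs(q_new - q)) < 1e-10:
        q = q_new
        break
    q = q_new

print("steady-state excitation probabilities q:", np.round(q, 4))

In the detection setting described in the talk, such q values (or a learned readout over them) would be driven by traffic features and thresholded to flag an attack; that application layer is not reproduced here.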
Janusz Kacprzyk "AI-asssisted/enabled smart environments: from techno-centric to human-centric approaches"
Fellow, IEEE, IET, EurAI, IFIP, IFSA, SMIA
Full member, Polish Academy of Sciences
Member, Academia Europaea
Member, European Academy of Sciences and Arts
Member, European Academy of Sciences
Member, International Academy for Systems and Cybernetic Sciences (IASCYS)
Foreign member, Bulgarian Academy of Sciences
Foreign member, Finnish Society of Sciences and Letters
Foreign member, Royal Flemish Academy of Belgium for Sciences and the Arts (KVAB)
Foreign member, Spanish Royal Academy of Economic and Financial Sciences (RACEF)
Systems Research Institute, Polish Academy of Sciences
Ul. Newelska 6, 01-447 Warsaw, Poland
Email: kacprzyk@ibspan.waw.pl
Google: kacprzyk
Janusz Kacprzyk is Professor of Computer Science at the Systems Research Institute, Polish Academy of Sciences, at WIT – Warsaw School of Information Technology, and at the AGH University of Science and Technology in Cracow, and Professor of Automatic Control at PIAP – Industrial Institute of Automation and Measurements in Warsaw, Poland. He is Honorary Foreign Professor at the Department of Mathematics, Yili Normal University, Xinjiang, China. He is a Full Member of the Polish Academy of Sciences, a Member of Academia Europaea, the European Academy of Sciences and Arts, the European Academy of Sciences and the International Academy for Systems and Cybernetic Sciences (IASCYS), and a Foreign Member of the Bulgarian Academy of Sciences, the Spanish Royal Academy of Economic and Financial Sciences (RACEF), the Finnish Society of Sciences and Letters, the Royal Flemish Academy of Belgium for Sciences and the Arts (KVAB), the National Academy of Sciences of Ukraine and the Lithuanian Academy of Sciences. He has been awarded 8 honorary doctorates. He is a Fellow of IEEE, IET, IFSA, EurAI, IFIP, AAIA, I2CICC, and SMIA.
His main research interests include the use of modern computational and artificial intelligence tools, notably fuzzy logic, in systems science, decision making, optimization, control, data analysis and data mining, with applications in mobile robotics, systems modeling, ICT, etc.
He has authored 7 books, (co)edited more than 150 volumes and (co)authored more than 650 papers, including ca. 150 in journals indexed by the WoS. He was listed in the 2020 and 2021 "World's 2% Top Scientists" rankings by Stanford University, Elsevier (Scopus) and SciTech Strategies, published in PLOS Biology. He is the editor-in-chief of 8 book series at Springer and of 2 journals, and is on the editorial boards of ca. 40 journals. He is President of the Polish Operational and Systems Research Society and Past President of the International Fuzzy Systems Association.
Wojciech Samek "Explainable AI for LLMs"
Fraunhofer Heinrich Hertz Institute, Germany
Wojciech Samek is a professor in the Department of Electrical Engineering and Computer Science at the Technical University of Berlin and is jointly heading the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany. He studied computer science at Humboldt University of Berlin from 2004 to 2010, was a visiting researcher at NASA Ames Research Center, CA, USA, and received his Ph.D. degree in machine learning from the Technische Universität Berlin in 2014. He is associated faculty at the ELLIS Unit Berlin and the DFG Graduate School BIOQIC, and a member of the scientific advisory board of IDEAS NCBR. Furthermore, he is a senior editor of IEEE TNNLS, an editorial board member of Pattern Recognition, and an elected member of the IEEE MLSP Technical Committee. He is the recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award, and part of the expert group developing the ISO/IEC MPEG-17 NNC standard. He is the leading editor of the Springer book “Explainable AI: Interpreting, Explaining and Visualizing Deep Learning” and the organizer of various special sessions, workshops and tutorials on topics such as explainable AI, neural network compression, and federated learning. He has co-authored more than 150 peer-reviewed journal and conference papers, some of them listed by Thomson Reuters as "Highly Cited Papers" (i.e., top 1%) in the field of Engineering.
Abstract
Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process. However, achieving faithful explanations for the entirety of a black-box transformer model while maintaining computational efficiency is difficult. This talk will present a recent extension of the Layer-wise Relevance Propagation (LRP) attribution method to handle attention layers, which addresses this challenge effectively. Our method is the first to faithfully and holistically attribute not only input but also latent representations of transformer models, with computational efficiency similar to that of a single backward pass. Since LRP is a model-aware XAI method, it not only identifies the relevant features in input space (e.g., pixels or words) but also provides deep insights into the model's representations and reasoning process. Through extensive evaluations against existing methods on Llama 2, Flan-T5 and the Vision Transformer architecture, we demonstrate that our proposed approach surpasses alternative methods in terms of faithfulness and enables the understanding of latent representations, opening the door for concept-based explanations.
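As background for the attribution method discussed above, here is a minimal sketch of the classic LRP epsilon rule for a stack of dense ReLU layers: relevance assigned to a layer's outputs is redistributed to its inputs in proportion to the contributions z = w * a. It deliberately does not reproduce the talk's actual contribution, the extension to attention layers; the toy layer sizes and random weights are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)

def forward(x, layers):
    """Forward pass through dense ReLU layers, keeping activations for LRP."""
    activations = [x]
    for W, b in layers:
        x = np.maximum(0.0, W @ x + b)
        activations.append(x)
    return activations

def lrp_epsilon(activations, layers, relevance_out, eps=1e-6):
    """Redistribute output relevance back to the input using the epsilon rule."""
    R = relevance_out
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = W * a[None, :]                          # contributions z[k, j] = w_kj * a_j
        pre = z.sum(axis=1) + b                     # pre-activations of this layer
        denom = pre + eps * np.where(pre >= 0, 1.0, -1.0)
        R = (z / denom[:, None] * R[:, None]).sum(axis=0)
    return R

# Toy network: 8 -> 6 -> 4 with random weights (illustrative only).
layers = [(rng.normal(size=(6, 8)), rng.normal(size=6)),
          (rng.normal(size=(4, 6)), rng.normal(size=4))]
x = rng.normal(size=8)

acts = forward(x, layers)
R_out = np.zeros(4)
R_out[np.argmax(acts[-1])] = 1.0                    # unit relevance on the top output
R_in = lrp_epsilon(acts, layers, R_out)

print("input relevances:", np.round(R_in, 3))
# Conservation is only approximate here, since the bias terms absorb part
# of the relevance under the epsilon rule.
print("sum(R_in) =", round(float(R_in.sum()), 3), " sum(R_out) =", round(float(R_out.sum()), 3))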
Qiyu Sun "Toward Better Accuracy and Transferability in Self-Supervised Monocular Depth Estimation"
East China University of Science and Technology, Shanghai, China
Qiyu Sun received her B.S. degree in automation from the East China University of Science and Technology, Shanghai, China, in 2019. She is currently pursuing her Ph.D. degree in control science and engineering at the same university. From 2022 to 2023, she was a visiting Ph.D. student at Linköping University, Sweden. Her research interests concern computer vision and deep learning, mainly focusing on depth estimation and semantic segmentation. She has authored or co-authored more than 20 papers in international journals and conferences, including TNNLS, TIE, TITS, ICCV, etc.
Abstract
Monocular depth estimation is a classical task in computer vision, which aims to estimate the distance between the agent and the objects in its environment. Recently, deep-learning-based depth estimation methods have achieved significant advances. The self-supervised framework uses geometric constraints between images as the main supervisory signal, and hence training data are much easier to obtain. This talk will introduce the main accomplishments and recent developments in self-supervised monocular depth estimation. Then, recent work by the speaker's group will be presented, which focuses on improving the accuracy and transferability of self-supervised monocular depth estimation models. Finally, prospects for future development will be discussed.
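To make the "geometric constraints between images" supervisory signal concrete, the sketch below implements the widely used photometric reprojection idea: a source frame is inverse-warped into the target view using a depth map and a relative camera pose, and the warped image is compared with the target with a plain L1 term (SSIM terms, masking, and multi-scale handling used in practice are omitted). The intrinsics, depth, and pose here are random placeholders, not outputs of the speaker's models.

import numpy as np

def warp_source_to_target(src, depth, K, T):
    """Inverse-warp src into the target view given target depth, intrinsics K,
    and a target-to-source rigid transform T (4x4). Nearest-neighbour sampling."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)    # homogeneous pixels
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)               # back-project to 3-D
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src_cam = (T @ cam_h)[:3]                                         # move points to source frame
    src_pix = K @ src_cam
    src_pix = src_pix[:2] / np.clip(src_pix[2:3], 1e-6, None)         # perspective divide
    us = np.clip(np.round(src_pix[0]).astype(int), 0, w - 1)
    vs = np.clip(np.round(src_pix[1]).astype(int), 0, h - 1)
    return src[vs, us].reshape(h, w)

def photometric_l1(target, warped):
    """Mean absolute photometric error between the target and warped source."""
    return np.mean(np.abs(target - warped))

# Toy data: random grayscale frames, constant depth, small forward motion.
h, w = 64, 96
K = np.array([[80.0, 0.0, w / 2], [0.0, 80.0, h / 2], [0.0, 0.0, 1.0]])
target = np.random.rand(h, w)
source = np.random.rand(h, w)
depth = np.full((h, w), 5.0)                    # placeholder depth prediction
T = np.eye(4); T[2, 3] = 0.1                    # placeholder pose: 0.1 m along z

warped = warp_source_to_target(source, depth, K, T)
print("photometric L1 loss:", round(photometric_l1(target, warped), 4))

In a self-supervised pipeline, depth and T would come from trainable networks and this loss would be backpropagated through a differentiable sampler rather than the nearest-neighbour lookup used here.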
Yang Tang "Distributed Optimization, Decision-making and Games in Multi-agent Systems"
East China University of Science and Technology, Shanghai, China
Yang Tang received the B.S. and Ph.D. degrees in electrical engineering from Donghua University, Shanghai, China, in 2006 and 2010, respectively. From 2008 to 2010, he was a Research Associate with The Hong Kong Polytechnic University, Hong Kong. From 2011 to 2015, he was a Post-Doctoral Researcher with the Humboldt University of Berlin, Berlin, Germany, and with the Potsdam Institute for Climate Impact Research, Potsdam, Germany. He is now a Professor with the East China University of Science and Technology, Shanghai. His current research interests include distributed estimation/control/optimization, cyber-physical systems, hybrid dynamical systems, computer vision, reinforcement learning and their applications.
Prof. Tang is an IEEE Fellow. He was a recipient of the Alexander von Humboldt Fellowship. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, IEEE Transactions on Industrial Informatics, IEEE/ASME Transactions on Mechatronics, IEEE Transactions on Circuits and Systems-I: Regular Papers, IEEE Transactions on Cognitive and Developmental Systems, IEEE Transactions on Emerging Topics in Computational Intelligence, IEEE Systems Journal, Engineering Applications of Artificial Intelligence (an IFAC journal) and Science China Information Sciences, among others. He has published more than 200 papers in international journals and conferences, including more than 100 papers in IEEE Transactions and 20 papers in IFAC journals. He has been recognized as a best/outstanding Associate Editor of IEEE journals four times. He is a (leading) guest editor of several special issues focusing on autonomous systems, robotics, and industrial intelligence in IEEE Transactions.
Abstract
In the current era, multi-agent systems have witnessed rapid development, showcasing immense potential in domains such as energy-economic dispatch, machine learning, and collaborative operations of complex systems. This talk introduces optimization and games in multi-agent systems. In the context of privacy protection and asynchronous computation, the talk explores reducing the communication load in distributed optimization through various event-triggered mechanisms. Considering the system's inherent uncertainties and the model's interpretability in decision-making, the talk also delves into data-driven decision-making methods. Swarm game theory comprises two primary categories: analytical games and reinforcement learning (RL)-based games. Analytical games focus on constructing mathematical models among swarm members and using these models to devise effective strategies through thorough analysis. RL-based games, on the other hand, prioritize the application of reinforcement learning techniques, emphasizing the role of trial and error in the strategic decision-making process to refine and perfect game strategies.
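As a small, self-contained illustration of the event-triggered idea mentioned above, the sketch below runs consensus-based distributed gradient descent on local quadratic costs, where each agent rebroadcasts its state only when it has drifted from its last broadcast value by more than a threshold. The graph, costs, step size, and trigger rule are illustrative choices, not the specific mechanisms analyzed in the speaker's work.

import numpy as np

# Ring network of 4 agents with doubly-stochastic mixing weights.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

# Local costs f_i(x) = 0.5*(x - a_i)^2; the global optimum of their sum is mean(a).
a = np.array([1.0, 3.0, -2.0, 6.0])
grad = lambda x: x - a                     # element-wise local gradients

x = np.zeros(4)                            # local estimates
x_hat = x.copy()                           # last broadcast values
alpha, threshold = 0.05, 0.05
broadcasts = 0

for k in range(300):
    # Event trigger: agent i rebroadcasts only if its state has moved enough.
    trigger = np.abs(x - x_hat) > threshold
    x_hat = np.where(trigger, x, x_hat)
    broadcasts += int(trigger.sum())
    # Consensus step on the broadcast values, followed by a local gradient step.
    x = W @ x_hat - alpha * grad(x)

print("estimates:", np.round(x, 3), " true optimum:", a.mean())
print("broadcasts used:", broadcasts, "of", 300 * 4, "possible")

The estimates settle in a neighbourhood of the optimum whose size depends on the step size and trigger threshold, while the broadcast count shows the communication savings relative to transmitting at every step.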
Michal Pluháček - tutorial "Advancements in Metaheuristic Design through Large Language Models: Exploring Challenges and Opportunities"
Tomas Bata University in Zlín, Czechia
Michal Pluháček received his Ph.D. degree in Information Technologies from Tomas Bata University in Zlín, the Czech Republic, in 2016, with the dissertation topic "Modern method of development and modifications of evolutionary computational techniques." He currently works as a junior researcher at the Regional Research Centre CEBIA-Tech of Tomas Bata University in Zlín. He is the author of many journal and conference papers on Particle Swarm Optimization and related topics. His research focus includes swarm intelligence theory and applications and artificial intelligence in general. In 2019, he completed a six-month research stay at the New Jersey Institute of Technology, USA, focusing on swarm intelligence and swarm robotics. He became an associate professor in 2023 after successfully defending his habilitation thesis on the topic "Inner Dynamics of Evolutionary Computation Techniques: Meaning for Practice."
Abstract
This tutorial discusses original research connecting the fields of generative AI and Evolutionary Computation (EC). Through engaging demonstrations and discussion, it aims to bring together experts from basic EC research, experts from different EC application areas, and experts or enthusiasts in generative AI and large language models. The goal is to present a unique combination of techniques and deeper insights into the usability, prompting, and reasoning of the outputs of generative AI, and its impact on the design, selection, performance (improvements), efficiency, and understanding of EC techniques. Such research has become a vital part of science and engineering at both the theoretical and practical levels.
The original contribution of this tutorial is a presentation of the metaheuristic design procedure, either as a whole or through its individual parts, using various partial approaches, one-shot or iterative automated calls, and specific prompt engineering, together with a discussion of how other features of the search space of the problem being solved (optimized) can be incorporated. The tutorial will conclude with reflections on the broader implications of integrating LLMs such as GPT-4 in metaheuristic development. We will discuss the potential future directions this research could take and how it might shape the evolution of algorithmic problem-solving in various domains.
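As one possible schematic of the "iterative automated calls" mentioned above, the pseudo-harness below asks a language model for a candidate mutation operator as Python source, evaluates it inside a simple (1+1) optimizer on the sphere benchmark, and feeds the score back into the next prompt. The query_llm function is a hypothetical placeholder for whatever LLM API one uses, here stubbed with a fixed Gaussian operator so the sketch runs offline; nothing in it reproduces the tutorial's actual tooling or prompts.

import numpy as np

def query_llm(prompt):
    """Hypothetical placeholder for an LLM API call. A real harness would send
    `prompt` to a model and return the generated source code; this stub simply
    returns a fixed Gaussian-mutation operator so the loop runs without a model."""
    return (
        "def mutate(x, rng):\n"
        "    return x + rng.normal(scale=0.3, size=x.shape)\n"
    )

def evaluate_operator(source, budget=200, dim=5, seed=0):
    """Plug the generated mutation operator into a (1+1) hill climber on the
    sphere function and return the best objective value found."""
    ns = {}
    exec(source, ns)                  # acceptable here only because the source is our own stub
    mutate = ns["mutate"]
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)
    best = float(np.sum(x ** 2))
    for _ in range(budget):
        y = mutate(x, rng)
        fy = float(np.sum(y ** 2))
        if fy < best:
            x, best = y, fy
    return best

prompt = "Write a Python function mutate(x, rng) for a real-valued (1+1) optimizer."
for it in range(3):                   # iterative refinement loop
    source = query_llm(prompt)
    score = evaluate_operator(source, seed=it)
    print(f"iteration {it}: best sphere value = {score:.4f}")
    # Feed the result back so the next call can try to improve the operator.
    prompt = (f"The previous operator reached {score:.4f} on the 5-D sphere "
              f"function. Propose an improved mutate(x, rng).\n\n{source}")

A real experiment would replace the stub with live model calls, sandbox the generated code, and evaluate on a proper benchmark suite rather than a single sphere function.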