Session Information
Cluster 3
AI for Scientific and Social Challenges
Co-Chairs
Eunok Paek, Jinsul Kim, Ji Hoon Kim
Description
Discover how AI is transforming scientific problem-solving and addressing societal challenges. Experts will discuss AI’s impact on space observation, protein structure prediction, and energy solutions. Topics include brain-computer interfaces, language model training, and ethical considerations. This session fosters innovation at the intersection of AI technology and societal impact.
- AI in Scientific Innovation
- Ethical AI for Societal Impact
Program
Day 1 (December 5) |
---|---
10:00~11:05 | Chair: Eunok Paek
Human-AI Alignment for Speech Brain-Computer Interfaces | Gopala Krishna Anumanchipalli (UC Berkeley)
Brain-to-speech: Decoding Intention and Generating Speech from Brain Signals | Seong-Whan Lee (Korea U)
14:40~16:00 | Chair: Jinsul Kim
You want to be able to Train Language Models Yourself | Kyunghyun Cho (New York U)
Advancing Science and Society with AI: Real-World Applications and Responsible Use | Simon Elisha (Amazon Web Services)

Day 2 (December 6) |
---|---
09:40~10:55 | Chair: Ji Hoon Kim
DeepFold: Recent Advances in Protein Structure Prediction and Beyond | Keehyoung Joo (Korea Institute for Advanced Study)
Deeper, Sharper, Faster: Application of Efficient Transformer to Galaxy Image Restoration | Taehwan Kim (UNIST)
Transforming the Energy Sector with AI – Challenges and Solutions | Jinsul Kim (Chonnam Nat’l U)
Talk Title
Human-AI Alignment for Speech Brain-Computer Interfaces
Abstract
We are witnessing a new wave of applications enabled by the Deep Learning revolution across all domains of AI, such as audio, image, and text. However, we still have a long way to go to unleash the potential of AI in healthcare, particularly in low-resource scenarios where little data may be available from the target end users of the technology. This is especially true in Brain-Computer Interfacing for paralyzed individuals who may have lost the ability to communicate vocally. In this talk, I will present a series of works in which we have tried to align AI models toward human representations and mechanisms of speech production. We show that current leading approaches in AI (particularly Self-Supervised Learning) hold enormous potential in this endeavor. I will demonstrate our recent attempts to use this paradigm to externally enable spoken communication in paralyzed individuals.
Short bio
Gopala Anumanchipalli is the Robert E. and Beverly A. Brooks Assistant Professor in the EECS department at UC Berkeley, where he leads the Berkeley Speech Group. He holds an adjunct position at UCSF and is a member of Berkeley AI Research (BAIR) and Computational Precision Health (CPH). His group focuses on the science and engineering of spoken language, with applications to human health — both for screening speech disorders and for externally restoring lost function using Brain-Computer Interfaces. He obtained his PhD from Carnegie Mellon University and completed postdoctoral training at UCSF. He has been recognized as a Kavli Fellow, Noyce Innovator, Hellman Fellow, Google Research Scholar, and JP Morgan AI Research awardee, among other honors.
Talk Title
Brain-to-speech: Decoding Intention and Generating Speech from Brain Signals
Abstract
A Brain-Computer Interface is a technology that enables interaction with external devices by converting brain signals into computer commands. When combined with artificial intelligence, it shows great potential for advancing communication methods. Brain-to-speech refers to systems that translate brain activity, specifically imagined speech, into audible speech. By connecting neural signals directly to human language, these systems offer a way to enhance the naturalness of brain-based communication. Recent breakthroughs in understanding the neural processes behind imagined speech, alongside improvements in speech synthesis, have made the direct translation of brain signals into speech a promising area of research. This presentation introduces the current state of Brain-to-speech technology, with a focus on non-invasive techniques for potential silent communication through brain signals.
Short bio
Dr. Seong-Whan Lee is a distinguished professor at Korea University, where he is the head of the Department of Artificial Intelligence. He received the B.S. degree in computer science and statistics from Seoul National University in 1984, and the M.S. and Ph.D. degrees in computer science from the Korea Advanced Institute of Science and Technology in 1986 and 1989, respectively. In March 1995, he joined the faculty of the Department of Computer Science and Engineering at Korea University, where he is now a tenured full professor. A Fellow of the IAPR (1998), the Korean Academy of Science and Technology (2009), and the IEEE (2010), he has served several professional societies as chairman or governing board member. His research interests include pattern recognition, artificial intelligence, and brain engineering.
Talk Title
You want to be able to Train Language Models Yourself
Abstract
In this talk, I will discuss why it is important for every researcher, developer, and organization to "be able" to train a large-scale language model themselves. In particular, I will discuss three major challenges faced when using off-the-shelf or closed-door commercial language models: (1) the lack of transparency, (2) the lack of maintainability, and (3) the difficulty of compliance.
Short bio
Kyunghyun Cho is a professor of computer science and data science at New York University and a senior director of frontier research on the Prescient Design team within Genentech Research & Early Development (gRED). He is also a CIFAR Fellow of Learning in Machines & Brains and an Associate Member of the National Academy of Engineering of Korea. He served as a (co-)Program Chair of ICLR 2020, NeurIPS 2022, and ICML 2022, and is a founding co-Editor-in-Chief of the Transactions on Machine Learning Research (TMLR). He was a research scientist at Facebook AI Research from June 2017 to May 2020 and a postdoctoral fellow at the University of Montreal until Summer 2015 under the supervision of Prof. Yoshua Bengio, after receiving MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko, and Dr. Alexander Ilin. He received the Samsung Ho-Am Prize in Engineering in 2021. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.
Talk Title
Advancing Science and Society with AI: Real-World Applications and Responsible Use
Abstract
Artificial intelligence is transforming scientific research and addressing critical societal challenges. This session explores how customers are leveraging AI, including generative AI, to solve real-world problems across industries. Simon Elisha, Technologist at AWS, will share concrete examples of AI's impact, contrasting current generative AI approaches with previous methods. The talk will also cover best practices for applying these powerful technologies securely and responsibly in your own organization. Join us to learn how AI is driving meaningful progress in science and society, and how you can harness its potential ethically and effectively.
Short bio
Simon Elisha is a visionary technology leader known for his ability to transform technology innovation into tangible business advantages. With over three decades of experience encompassing hands-on development and influential business leadership, he has demonstrated an unwavering commitment to innovation through his nine cloud technology patents, establishing him as a trailblazer in the field.
As a global technology personality, Simon founded and hosts The Official AWS Podcast, attracting a global audience with over 22 million downloads and five million listening hours to date.
He is passionate about bridging the gap between government organizations and citizens, empowering organizations to unlock innovation and leverage the efficiency of the cloud. Simon's career includes senior positions at organizations such as Pivotal Software, Cisco, and Hitachi Data Systems. He holds an Honours Degree in Information Technology from Monash University and resides in Melbourne with his family.
Talk Title
DeepFold: Recent Advances in Protein Structure Prediction and Beyond
Abstract
AlphaFold’s groundbreaking AI methods, which earned its developers a Nobel Prize in Chemistry, made significant advances in protein structure prediction and set new standards for accuracy and efficiency. DeepFold was initially developed based on AlphaFold's core approach, incorporating modified loss functions, new template alignment, and global optimization techniques to improve side-chain modeling, which showed promising results in CASP15. Recently, further key enhancements have been made to improve protein structure prediction. These include integrating a Protein Language Model (PLM) for multiple sequence alignment (MSA) search, significantly reducing computational time. Additionally, DeepFold has been re-developed using the PyTorch framework to improve code extensibility, training speed, and usability. We also introduced a new protocol to improve multimer prediction by utilizing predicted monomer structures and MSAs generated using the PLM. Furthermore, we developed a user-friendly web server to enhance accessibility. Future research aims to overcome the limitations of current MSA search methods by exploring generative AI approaches and expanding the application scope to protein-ligand and protein-nucleic acid interactions.
Short bio
Dr. Keehyoung Joo is a researcher at the Korea Institute for Advanced Study (KIAS), focusing on protein structure prediction, optimization, and its applications. His research leverages artificial intelligence and global optimization techniques to enhance prediction performance and extend its applicability. Dr. Joo contributed to the development of DeepFold, an advanced method that improves both side-chain accuracy and the backbone structure. Recent developments include integrating Protein Language Models for efficient multiple sequence alignment (MSA) generation and launching a user-friendly web server to increase accessibility. He has also made significant contributions to protein structure prediction, achieving excellent results in multiple CASP competitions. Building on this experience, Dr. Joo aims to expand his research into protein interactions and protein design, contributing to advancements in computational life sciences.
Talk Title
Deeper, Sharper, Faster: Application of Efficient Transformer to Galaxy Image Restoration
Abstract
The Transformer architecture has revolutionized the field of deep learning over the past several years across diverse areas. We propose to apply Zamir et al.'s efficient transformer to perform deconvolution and denoising to enhance astronomical images. We conducted experiments using pairs of high-quality images and their degraded versions, and our deep learning model demonstrates exceptional restoration of photometric, structural, and morphological information. When compared with the ground-truth JWST images, the enhanced versions of our HST-quality images reduce the scatter of isophotal photometry, Sersic index, and half-light radius by factors of 4.4, 3.6, and 4.7, respectively, with Pearson correlation coefficients approaching unity. We anticipate that this deep learning model will prove valuable for a number of scientific applications.
Short bio
Taehwan Kim is currently an assistant professor in the Artificial Intelligence Graduate School and the Department of Computer Science and Engineering at the Ulsan National Institute of Science and Technology (UNIST). Previously, he was an applied scientist at Amazon Alexa AI and a lead research scientist at a start-up company, ObEN. Before then, he was a postdoctoral scholar in the Computing and Mathematical Sciences department at the California Institute of Technology, working with Prof. Yisong Yue. He completed his PhD in 2016 at the Toyota Technological Institute at Chicago, advised by Prof. Karen Livescu, a master's degree in Computer Science at USC, and bachelor's degrees in Computer Science & Engineering and Mathematics at POSTECH. His main research interests include deep learning, generative models, and multimodal learning.
Talk Title
Transforming the Energy Sector with AI – Challenges and Solutions
Abstract
Energy plays a crucial role in our daily lives, powering everything from electricity to technological advancements and industrial operations. While artificial intelligence (AI) offers transformative potential for the energy industry, it also comes with unique challenges. AI applications require vast amounts of data for model training and insights, and the data collection process often relies on facility-based sensors and systems. This can result in noise or errors, impacting data quality. Our research focuses on overcoming these challenges and maximizing the potential of AI in the energy sector. Key areas of focus include improving energy demand forecasting, optimizing building energy management, utilizing natural language processing for energy-specific applications, and predicting and proactively addressing potential issues. In this presentation, we will highlight the critical role of our research in enhancing the efficiency, sustainability, and decision-making processes in the energy industry.
Short bio
Jinsul Kim received a BS degree in Computer Science from the University of Utah, Salt Lake City, Utah, USA, in 1998, and MS (2004) and PhD (2008) degrees in Digital Media Engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea. From 2004 to 2009, he worked as a researcher at ETRI and later served as an Associate Professor at Korea Nazarene University from 2009 to 2011. He is currently a Professor at Chonnam National University, Gwangju, Korea. In addition to his academic journey, he was at the University of California San Diego from March 2016 to February 2017 on a professor exchange program. He is the Director of the G5-AICT research center and co-director of the AI Innovation Hub Research and Development Project at Chonnam National University's Information and Computer Center. He is also the chairman of the National and Public University Intelligence Organization Council. Beyond academia, Prof. Kim actively contributes to international standardization as a member of Korea's national delegation for ITU-T SG13. His research interests include AI, Energy AI, quality of service/experience, edge cloud computing, metaverse, multimedia, and immersive media networks.