All times are in Eastern Time (ET).

Schedule

14:00-14:05   Opening and introduction of the keynote speaker
14:05-15:00   Opening Keynote
15:00-15:40   Paper Session 1

    TransCompressor: LLM-Powered Multimodal Data Compression for Smart Transportation
    Huanqi Yang (City University of Hong Kong), Rucheng Wu (City University of Hong Kong), Weitao Xu (City University of Hong Kong) (Best Paper Award!)

    AutoJournaling: A Context-Aware Journaling System Leveraging MLLMs on Smartphone Screenshots
    Tianyi Zhang (University of Melbourne), Shiquan Zhang (University of Melbourne), Le Fang (University of Melbourne), Hong Jia (University of Melbourne), Vassilis Kostakos (University of Melbourne), Simon D'Alfonso (University of Melbourne) (Best Presentation Award!)

15:40-15:55   Coffee Break
15:55-16:35   Paper Session 2

    Efficient and Personalized Mobile Health Event Prediction via Small Language Models
    Xin Wang (University of Melbourne), Ting Dang (University of Melbourne), Vassilis Kostakos (University of Melbourne), Hong Jia (University of Melbourne)

    Stress-GPT: Stress detection with an EEG-based foundation model
    Catherine Lloyd (Cardiff University), Loic Lorente Lemoine (Cardiff University), Reiyan Al-Shaikh (Cardiff University), Kim Tien Ly (University of Oxford), Hakan Kayan (Cardiff University), Charith Perera (Cardiff University), Nhat Pham (Cardiff University)

16:35-16:40   Best Paper Award and Closing Remarks
16:40-18:00   Networking

Keynote Speakers

Using Foundation Models to Coordinate Distributed Learning

Carlee Joe-Wong (Carnegie Mellon University)

Google Scholar | Personal page

Foundation models in machine learning are often used to capture broad patterns within large datasets, with the idea that they can then be fine-tuned or adapted to specific learning tasks. In distributed or federated settings, such foundation models are typically used to initialize a federated learning process: the model is distributed to all clients, which then proceed to alternate training the model on local data and synchronizing the resulting local models. In this talk, we explore using such a pre-trained foundation model not just as a model initialization but as a way to implicitly coordinate learning across distributed clients. We first consider a typical federated learning setting and show that incorporating the starting foundation model throughout the training can stabilize the training, alleviating the challenges of heterogeneous and non-i.i.d. (independent and identically distributed) data distributions across clients. We then consider a sensor network-inspired scenario where clients (i.e., sensors) collect correlated data. Since learning such correlations in real time would require significant sensor communication overhead and compromise sensors’ data privacy, we show that a pre-trained foundation model that captures these correlations can significantly improve sensor predictions.
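
To make the first setting concrete, below is a minimal sketch (an illustration only, not the method presented in the talk) of federated averaging over synthetic non-i.i.d. clients, in which each local update adds a proximal penalty pulling the client's weights back toward the pre-trained foundation model, so the foundation model keeps coordinating training after initialization. The quadratic loss, the penalty weight mu, and all names are assumptions made for this sketch.

# Sketch (assumed setup, not the speaker's exact algorithm): federated averaging
# where each client's local training is regularized toward pre-trained
# "foundation model" weights, so the foundation model coordinates training
# throughout rather than only at initialization.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, w_foundation, X, y, lr=0.1, mu=0.5, steps=20):
    # One client's local training: gradient steps on a least-squares loss
    # plus a proximal term (mu/2) * ||w - w_foundation||^2 (illustrative choice).
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_foundation)
        w -= lr * grad
    return w

# Stand-in for pre-trained foundation-model weights.
d = 5
w_foundation = rng.normal(size=d)

# Non-i.i.d. clients: each sees a shifted version of the underlying task.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, d))
    y = X @ (w_foundation + 0.3 * rng.normal(size=d)) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# Federated rounds: broadcast the global model, train locally, average.
w_global = w_foundation.copy()  # foundation model also serves as the initialization
for rnd in range(10):
    local_models = [local_update(w_global, w_foundation, X, y) for X, y in clients]
    w_global = np.mean(local_models, axis=0)
    print(f"round {rnd}: distance from foundation = {np.linalg.norm(w_global - w_foundation):.3f}")

In this toy run, the proximal term keeps the averaged global model anchored near the foundation weights while it adapts to the clients' shifted tasks, which loosely mirrors the stabilizing effect described in the abstract.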

Bio: Carlee Joe-Wong is the Robert E. Doherty Associate Professor of Electrical and Computer Engineering at Carnegie Mellon University. She received her A.B., M.A., and Ph.D. degrees from Princeton University in 2011, 2013, and 2016, respectively. Her research interests lie broadly in the optimization of networked systems, most recently focusing on online and distributed machine learning in networked contexts. From 2013 to 2014, she was the Director of Advanced Research at DataMi, a startup she co-founded from her Ph.D. research on mobile data pricing. Her research has received several awards, including the NSF CAREER Award in 2018 and best paper and poster awards from IEEE INFOCOM, ACM SIGMETRICS, and ACM/IEEE IPSN.

Sponsors

ACM
Cardiff University
UNSW
University of Melbourne