Workshop on Human-In-the-Loop Data Analytics
Co-located with SIGMOD 2024 (14 June 2024, Santiago, Chile)

Location: Europa

Past workshops: HILDA 2023 | HILDA 2022 | HILDA 2020 | HILDA 2019 | HILDA 2018 | HILDA 2017 | HILDA 2016

HILDA brings together researchers and practitioners to exchange ideas and results on human-data interaction. It explores how data management and analysis can be made more effective when taking into account the people who design and build these processes as well as those who are impacted by their results.

In HILDA 2022, we implemented a mentoring program (inspired by workshops such as PLATEAU) and are continuing it this year. Our focus is on promising and early-stage research, with a core component of the program being that each paper is assigned a mentor. More details on the process are below.

The theme for this edition of the workshop is HILDA and Large Language Models (LLMs); however, the workshop is not limited to this theme, and other topics are also of interest. We encourage research on guidelines and best practices for effective human-LLM collaboration, as well as research that questions the role of humans in traditional data pipelines given the emergence of LLMs.


The schedule is not finalized and may change. Please check it again closer to the event.

8:30 Opening Remarks
8:35 Keynote by Tim Kraska: ML and Generative AI is reshaping the entire data service industry, but what should academia do?
Session Chair: Kexin Rong
9:20 Transparent Data Preprocessing for Machine Learning
Sebastian Strasser, Meike Klettke
9:35 Towards Extending XAI for Full Data Science Pipelines
Nadja Geisler, Carsten Binnig
9:50 Guided Querying over Videos using Autocompletion Suggestions
Hojin Yoo, Arnab Nandi
10:05 Break
LLM in the Industry Panel
Moderator: Behrooz Omidvar-Tehrani
10:30 Introduction
10:35 Presentation
  • Xin Luna Dong: Next-Generation Intelligent Assistants for Wearable Devices
  • Hadas Kotek: The making of DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues
  • Tim Kraska and Fatma Ozcan: NL2SQL
  • Raghu Ramakrishnan: Copilots in Microsoft Fabric for SQL Generation and BI Report Generation
11:00 Discussion, Q&A
12:00 Lunch Break
Session Chair: Roee Shraga
14:00 Keynote by Renée Miller: Semantic Benchmark Generation: Can LLMs Generate Better Benchmarks than Humans?
14:45 (Short Paper) “It Took Longer than I was Expecting:” Why is Dataset Search Still so Hard?
Madelon Hulsebos, Wenjing Lin, Shreya Shankar, Aditya Parameswaran
14:55 (Short Paper) Key Insights from a Feature Discovery Use-Case Study
Andra Ionescu, Zeger Mouw, Efthimia Aivaloglou, Asterios Katsifodimos
15:05 Drag, Drop, Merge: A Tool for Streamlining Integration of Longitudinal Survey Instruments
Pratik Pokharel, Juseung Lee, Oliver Kennedy, Jeff Good, Marianthi Markatou, Andrew Talal, Raktim Mukhopadhyay
15:20 More of that, please: Domain Adaptation of Information Extraction through Examples & Feedback
Benjamin Hättasch, Carsten Binnig
15:35 Break
Session Chair: Kexin Rong
16:00 A Diagram Unifying ER and Data Flow Notation For Data Integration and Transformations For Data Science Collaborations
Robin Varghese, Nguyen Phan, Wojciech Macyna, Carlos Ordonez
16:15 CopycHats: Question Sequencing with Artificial Agents
Matan Solomon, Bar Genossar, Avigdor Gal
16:30 LLMs as an Interactive Database Interface for Designing Large Queries
Yilin Li, Deddy Jobson
16:45 Pipe(line) Dreams: Fully Automated End-to-End Analysis and Visualization
Cole Beasley, Azza Abouzied
17:00 Cocoon: Semantic Table Profiling Using Large Language Models
Zezhou Huang, Eugene Wu
17:15 Causal Dataset Discovery with Large Language Models
Junfei Liu, Shaotong Sun, Fatemeh Nargesian
17:30 Closing Remarks

HILDA 2024 Keynote Talks

Our program will feature the following invited keynote speakers, who will discuss the challenges of human-data interaction.

Title: ML and Generative AI is reshaping the entire data service industry, but what should academia do?

Tim Kraska: Associate Professor, MIT; Director of Applied Science, AWS

Abstract: Machine learning (ML) and Generative AI (GAI) are changing the way we build, operate, and use data systems. For example, ML-enhanced algorithms, such as learned scheduling algorithms and indexes/storage layouts, are being deployed in commercial data services; GAI code assistants help developers build features more quickly; ML-based techniques simplify operations by automatically tuning system knobs; and GAI-based assistants help debug operational issues. Most importantly, though, Generative AI is reshaping the way users interact with data systems. Even today, all leading cloud providers offer natural language to SQL (NL2SQL) features as part of their Python notebooks or SQL editors to increase the productivity of analysts. Business-line users are starting to use natural language in their visualization platforms and enterprise search, while application developers are exploring new ways to expose (structured) data as part of their GAI-based experiences using RAG and other techniques. Some even go so far as to say that "English will become the new SQL," despite the obvious challenge that English is often more ambiguous.
Arguably, industry is leading many of these efforts, and they are happening at unprecedented speed: almost every week there is a new product announcement. Yet much of this work feels ad hoc, and despite all the product announcements, many challenges remain before ML/GAI for systems becomes truly practical in all these areas. In this talk, I will provide an overview of some of these recent developments and outline how academic solutions often differ from those deployed in industry. Finally, I will list several opportunities for academia to not only contribute but also build a better, more grounded foundation.

Title: Semantic Benchmark Generation: Can LLMs Generate Better Benchmarks than Humans?

Renée Miller: University Distinguished Professor, Northeastern University; CERC Chair of Data Intelligence, University of Waterloo

Abstract: Data management has traditionally relied on synthetic data generators to produce structured benchmarks, like the TPC suite, where we can precisely control important parameters such as data size and distribution. These benchmarks were central to the success of database management systems. But more and more, data management problems are of a semantic nature: for example, determining whether two records (tuples) in different databases refer to the same real-world entity (entity matching), or finding tables that can be unioned in a semantically meaningful way (table union search). Semantic problems cannot be benchmarked using synthetic data, and our current methods for creating labeled benchmarks involve the manual curation of real data and are neither robust nor scalable. In this talk, I will consider whether we can use generative AI models, specifically large language models (LLMs), to generate benchmarks for a variety of structured data management problems. Can LLMs replace the need for human curation and labeling? I will discuss some of the challenges and possible ways forward.

HILDA 2024 Industry Panel

This year, we will also be pioneering an industry panel designed to foster meaningful discussions between industry leaders and researchers on the optimization and application of Large Language Models (LLMs) in the tech industry. Our panelists include:

Xin Luna Dong
Principal Scientist, Meta Reality Labs

Hadas Kotek
Senior Data Scientist

Tim Kraska
Director of Applied Science, AWS

Fatma Ozcan
Principal Engineer, Systems Research @ Google

Raghu Ramakrishnan
Technical Fellow and CTO for Data, Microsoft

What to submit

We encourage both standard research papers and more unusual works: for instance, papers that describe in-progress work, report on experiences, question accepted wisdom, raise open problems, or propose speculative new approaches. A HILDA submission should describe work or perspectives that will lead to interesting discussions at the workshop, or on which the authors want feedback.

We welcome work that proposes innovations in design to improve the way people can work with data management systems, as well as work that studies empirically how humans interact with existing systems. We welcome research in the traditions of the database systems community, reports on industry activities, and research on data topics from communities that study people and organizations. Topics in the spirit of this workshop include, but are not limited to:

  • novel query interfaces
  • interactive query refinement
  • data exploration and analysis
  • data visualization
  • human-assisted data integration and cleaning
  • perception-aware data processing
  • database systems designed for highly interactive use cases
  • empirical studies of database use
  • evaluating and ensuring fairness in data-driven decision-making processes
  • understanding the outcomes of processes through provenance and explanations
  • interactive debugging of complex data systems
  • crowd-powered data infrastructure

Submissions can also examine any of the above topics from an application or domain perspective.

HILDA is a forum where people from multiple communities engage with one another's ideas. We are keen to have submissions that present initial ideas and visions, just as much as reports on early results, or reflections on completed projects.

The workshop will focus on discussion and interaction, rather than static presentations of what is in the paper.

Review and Mentorship Process

HILDA reviews are single-blind. All submitted papers will be reviewed by at least three reviewers, who will assess the work's fit for HILDA's unique mentorship process this year, its quality, and its potential for future research.

Every accepted paper will be assigned a mentor who will engage with the authors, providing constructive feedback through one-on-one virtual discussions. We hope that the authors will work closely with their mentors to improve the substance and direction of their work.

In case of unforeseen conflicts, authors and mentors can withdraw without repercussions; in such situations, the program chairs will try to find another suitable mentor.


Authors are invited to submit papers between four and six pages in length, excluding references, using the standard SIGMOD paper formatting template. Submissions should reflect the current state of the research work, but also include a section on limitations and challenges on which the authors wish to receive feedback from their mentors and the HILDA community.

We are following the SIGMOD '24 submission format, i.e., the 2-column ACM Proceedings Format, using either the sample-sigconf.tex template (LaTeX 2e) or the Interim layout.docx template (Word). If you plan to use ACM's official Overleaf template, please use the 2-column version.

Submission website:


We will provide links to accepted papers in the program here, as well as publish them for a year through the ACM DL.

Important Dates

  • Workshop Date: June 14, 2024
  • Submission deadline (extended): April 15, 2024 AoE
  • Notification of outcome: May 7, 2024 (Tentative)
  • Camera-ready due: May 30, 2024 (before the workshop)

Workshop Chairs


  • Amir Gilad (The Hebrew University)
  • Amit Somech (Bar-Ilan University)
  • Arvind Satyanarayan (MIT CSAIL)
  • Bar Genossar (Technion - Israel Institute of Technology)
  • Brit Youngmann (Technion - Israel Institute of Technology)
  • Dixin Tang (University of Texas, Austin)
  • Fatemeh Nargesian (University of Rochester)
  • Giuseppe Santucci (University of Rome "La Sapienza")
  • Iddo Drori (Boston University and Columbia University)
  • Aamod Khatiwada (Northeastern University)
  • Grace Fan (Northeastern University)
  • Oliver A Kennedy (University at Buffalo, SUNY)
  • Senjuti Basu Roy (New Jersey Institute of Technology)
  • Slava Novgorodov (Tel Aviv University)
  • Tiziana Catarci (University of Rome "La Sapienza")
  • Vidya Setlur (Tableau Research)
  • Yannis Katsis (IBM Research)
  • Zhengjie Miao (Simon Fraser University)

Steering Committee

  • Carsten Binnig (TU Darmstadt)
  • Juliana Freire (New York University)
  • Aditya Parameswaran (University of California, Berkeley)
  • Arnab Nandi (The Ohio State University)


For questions, please email the workshop chairs directly.

Follow us

Join us on Twitter.