---
layout: home
title: GenAICHI 2024
description: CHI Workshop on Generative AI and HCI
permalink: /
---

A workshop on AI that does not simply categorize data and interpret text as determined by models, but instead creates something new—e.g., in images, molecules, or designs. This work moves the potential of AI systems from problem solving to problem finding, and it tends to change the “role” of the AI from decision-maker to human-supporter. The workshop focuses on various aspects of generative AI and its interactions with humans, including new sociotechnical opportunities for work and recreation that are afforded by powerful new interfaces.

Created with Adobe Firefly

Outline

Please join us for the third Generative AI and HCI workshop - this year at CHI 2024. We previously ran workshops in [2022]({% link 2022.md %}) and [2023]({% link 2023.md %}).

In the past year, we have seen or made powerful tools that can create images from textual descriptions, conduct reasonably coherent conversations, make writing suggestions for creative writers, and write code as a pair programmer. We have also seen claims of what an historical person “really looked like,” and of a “completed” version of a musical composition left unfinished by its composer’s untimely death. What all of these examples have in common is that the AI does not simply categorize data and interpret text as determined by models, but instead creates something new—e.g., in images, molecules, or designs. This work moves the potential of AI systems from problem solving to problem finding, and it tends to change the “role” of the AI from decision-maker to human-supporter. Following successful CHI workshops in 2022 and 2023, we focus on various aspects of generative AI and its interactions with humans, including

  • new sociotechnical opportunities for work and recreation that are afforded by powerful new interactive capabilities;
  • novel design challenges of systems that produce a different outcome after each invocation;
  • ethical issues related to their design and use; and
  • useful patterns for collaboration between humans and generative AI in different domains.

Generative AI can be defined as an AI system that uses existing media to create new, plausible media. This scope is broad, and the generative potential of AI systems varies greatly. Over the last decade, we have seen a shift in methodology, moving from expert systems based on patterns and heavy human curation towards stochastic and generative models such as Generative Adversarial Networks (GANs) that use big data to produce convincingly human-like results in various domains, and Large Language Models (LLMs) that can generate text, source code, and images from simple instructions (“prompts”).
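As a toy illustration of the stochastic, data-driven generation described above—not any model discussed at the workshop—a minimal word-level Markov chain "trained" on existing text produces new, plausible (if simple) sequences; the function names here are our own:

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word in the corpus to the list of words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a new sequence by repeatedly choosing a likely next word."""
    rng = random.Random(seed)  # fixed seed for reproducible output
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(successors))
    return " ".join(out)
```

The same principle—learn a distribution from existing media, then sample from it—scales up, with vastly more capable architectures, to the GANs and LLMs the workshop concerns.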

Past Workshop Proceedings

The Generative AI and HCI workshop has been running since 2022. Here you can find links to past workshop proceedings, including calls for papers, programs, and accepted workshop papers.

  • [2022 Proceedings]({% link 2022.md %})
  • [2023 Proceedings]({% link 2023.md %})

Program {#program}

All times are in Hawaiʻi Standard Time (HST). All presentations take place in Gather Town.

09:00-09:20 Welcome + Introductions (chair: Greg Walsh)

09:20-09:50 Paper session 1: Prompting (chair: Greg Walsh)

09:50-10:20 Paper session 2: Creativity: Media (chair: Greg Walsh)

10:20-11:00 Morning Break

11:00-11:20 Paper session 3: Creativity: Media: Trade-offs (chair: Charles Martin)

11:20-11:45 Paper session 4: Creativity: Text (chair: Charles Martin)

  • 11:20 [Workplace Everyday-Creativity through a Highly-Conversational UI to Large Language Models. Michael Muller (IBM); Jessica He (IBM Research); Justin Weisz (IBM Research AI).]({% link papers/2024/genaichi2024_15.pdf %})
  • 11:25 [Working with Large Language Models in the Rapid Ideation Process. Gionnieve Lim (Singapore University of Technology and Design); Simon Perrault (Singapore University of Technology and Design).]({% link papers/2024/genaichi2024_22.pdf %})
  • 11:30 Discussion.

11:45-12:20 Posters 1

12:20-14:00 Lunch

14:00-14:25 Posters 2

14:25-14:50 Paper session 5: Values: Harms (chair: Mary Lou Maher)

  • 14:25 [From Melting Pots to Misrepresentations: Exploring Harms in Generative AI. Sanjana Gautam (Pennsylvania State University); Pranav N Venkit (Pennsylvania State University); Sourojit Ghosh (University of Washington).]({% link papers/2024/genaichi2024_34.pdf %})
  • 14:30 The Need for Flexible Interfaces for Text-to-Image Auditing: A Case Study of DALL·E 2 and DALL·E 3. Clare Provenzano (Simon Fraser University); Parsa Rajabi (Simon Fraser University); Diana Cukierman (Simon Fraser University); Nicholas Vincent (Simon Fraser University).
  • 14:35 [How an AI Generated Experience Impacts Negative Perceptions of AI. MJ Johns (University of California Santa Cruz); Tyler Coleman (University of California).]({% link papers/2024/genaichi2024_2.pdf %})
  • 14:40 Discussion

14:50-15:20 Paper session 6: Values: Process and Media (chair: Mary Lou Maher)

15:20-16:00 Afternoon Break

16:00-16:20 Paper session 7: Analysis (chair: Charles Martin)

  • 16:00 [Can Nuanced Language Lead to More Actionable Insights? Exploring the Role of Generative AI in Analytical Narrative Structure. Vidya Setlur (Tableau Research); Larry Birnbaum (Salesforce/Northwestern University).]({% link papers/2024/genaichi2024_10.pdf %})
  • 16:05 [Unlocking the User Experience of Generative AI Applications: Design Patterns and Principles. Vinita Tibdewal (Google USA).]({% link papers/2024/genaichi2024_47.pdf %})
  • 16:10 Discussion

16:20-16:50 Posters 3

16:50-17:20 Closing (chair: Charles Martin) [slides]({% link papers/2024/genaichi2024_closing.pdf %})

Posters {#posters}

All posters are listed by theme below; all will be presented concurrently during the three poster periods shown in the program.

Tools or Partners

  • [Loremaster: Towards Better Mixed-Initiative Content Co-Creation in the Creative Industries. Oliver H Wood (Those Beyond).]({% link papers/2024/genaichi2024_5.pdf %})
  • [Generating Summary Videos from User Questions to Support Video-Based Learning. Kazuki Kawamura (The University of Tokyo); Jun Rekimoto (The University of Tokyo).]({% link papers/2024/genaichi2024_25.pdf %})
  • [Exploring Human-AI Collaboration Continuum in Augmented Reality Applications. Shokoufeh Bozorgmehrian (Virginia Tech); Joseph Gabbard (Virginia Tech).]({% link papers/2024/genaichi2024_41.pdf %})
  • How GenAI Can Affect Design Work. Samangi Wadinambiarachchi (The University of Melbourne); Jenny Waycott (The University of Melbourne); Ryan Kelly (RMIT University); Eduardo Velloso (The University of Sydney); Greg Wadley (University of Melbourne, AUS).

Prompting

Creativity

  • Uncovering Challenges and Changes in Artists’ Practices as a Consequence of AI. Petra Jääskeläinen (KTH); Anna-Kaisa Kaila (KTH Royal Institute of Technology, Stockholm); Andre Holzapfel (KTH Royal Institute of Technology in Stockholm).
  • Food Development through Co-creation with AI: bread with a "taste of love". Takuya Sera (NEC Corporation); Izumi Kuwata (NEC Corporation); Yuki Taya (NEC Corporation); Noritaka Shimura (NEC Corporation); Yosuke Motohashi (NEC Corporation).
  • [Towards a Hierarchy of Trust in Human-AI Music-Making. Adrian Hazzard (University of Nottingham); Craig Vear (University of Nottingham); Solomiya Moroz (University of Nottingham).]({% link papers/2024/genaichi2024_28.pdf %})
  • [Generative AI for Musicians: Small-Data Prototyping to Design Intelligent Musical Instruments. Charles Patrick Martin (Australian National University).]({% link papers/2024/genaichi2024_50.pdf %})
  • [Unstuck: Charting the Design Space of Generative AI-based Creativity Interventions. Matthew K Hong (Toyota Research Institute); Pablo Paredes (Toyota Research Institute); Shabnam Hakimi (Toyota Research Institute); Monica Van (Toyota Research Institute); Matthew Klenk (Toyota Research Institute).]({% link papers/2024/genaichi2024_14.pdf %})

Harms: Privacy

  • [Exploring ChatGPT’s ability to detect privacy violations in photo sharing. Arun Balaji Buduru (IIIT Delhi); Apu Kapadia (Indiana University); Christine Chen (Indiana University); Chris Page (Indiana University); Kendrick Mernitz (Indiana University); Ben Malone (Indiana University).]({% link papers/2024/genaichi2024_30.pdf %})
  • [Closing the Loop: Embedding Observability in the GenAI Product Lifecycle for Systematic Bias Mitigation. Freyam Mehta (International Institute of Information Technology Hyderabad); Nimmi Rangaswamy (International Institute of Information Technology Hyderabad).]({% link papers/2024/genaichi2024_49.pdf %})
  • [On CLIP's Ability of Analyzing Fake Images at a Large Scale: Why are they fake? Jinbin Huang (Arizona State University); Chen Chen (University of Maryland, College Park); Aditi Mishra (Arizona State University); Bum Chul Kwon (IBM Research); Zhicheng Liu (University of Maryland, College Park); Chris Bryan (Arizona State University).]({% link papers/2024/genaichi2024_8.pdf %})
  • [Detecting Generative AI Usage in Application Essays. Neil Natarajan (University of Oxford).]({% link papers/2024/genaichi2024_9.pdf %})
  • [Holding the Line: A Study of Writers’ Attitudes on Co-creativity with AI. Morteza Behrooz (Meta).]({% link papers/2024/genaichi2024_11.pdf %})

Values: Synthetic

  • [How can LLMs support UX Practitioners with image-related tasks? Ruican Zhong (University of Washington); Gary Hsieh (University of Washington); David McDonald (University of Washington).]({% link papers/2024/genaichi2024_29.pdf %})
  • [Exploring the role of Generative AI models for research in the public sector. Kelly McConvey (University of Toronto); Erina Moon (University of Toronto); Shion Guha (University of Toronto).]({% link papers/2024/genaichi2024_42.pdf %})
  • Machine Conversations as Design Conversations. Alice Cai (Harvard University); Celine Offerman (Delft University of Technology); Shiyan Zhang (Stevens Institute of Technology); Lydia B Chilton (Columbia University); Kevin Crowston (Syracuse University); Jeffrey V Nickerson (Stevens Institute of Technology).

Emotion

  • [Crafting Emotional TTS Conversation Responses Based on User Preferences. Laya Iyer (Stanford University), Sanmi Koyejo (Stanford University).]({% link papers/2024/genaichi2024_23.pdf %})
  • [Physiological Signals as Implicit Multimodal Input in Human-AI Interactions During Creative Tasks. Teodora Mitrevska (LMU).]({% link papers/2024/genaichi2024_59.pdf %})

Presentation and Attendance Details {#presentation}

This hybrid workshop will involve participants at their own locations around the world and in the CHI conference venue in Hawaiʻi.

The primary location for all paper and poster presentations will be online on Gather Town.

![Gather Town Conference Lobby]({% link images/genaichi-gathertown.jpg %})

  • First authors and registered workshop participants have received links for the Gather Town space via email.
  • Please make sure that you use the link emailed to you.
  • Gather Town works best in Chrome/Chromium or in the dedicated Gather application.
  • If you are on site in Hawai'i please remember to bring a device with you that can run Gather (either in app or browser) to see the posters and talks.
  • If you have trouble with Gather Town's screen sharing, audio, or video, see their troubleshooting page.

N.B. You should have received a link for the Gather Town Space via email.

Please note that the organisers (Charles, Greg, Mary Lou, Anna, and Michael) will not be onsite. If you don't log into our Gather space, we won't be able to find you, so make sure you test out Gather and have a device with you, whether you are remote or onsite.

Poster Presentation Instructions {#poster-pres}

You should create a poster to display in the poster area in our Gather space with the following specifications:

  • landscape orientation
  • at least 1000x600px resolution
  • png or jpg file
  • <=3MB
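The specifications above can be checked mechanically. As a minimal sketch (our own helper names, not an official tool), the snippet below reads the width and height straight from a PNG file's IHDR header using only the standard library, then applies the landscape, resolution, and size rules:

```python
import struct

MAX_BYTES = 3 * 1024 * 1024  # the workshop's 3 MB limit

def png_dimensions(data: bytes):
    """Return (width, height) from a PNG byte stream's IHDR chunk."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # Bytes 16-24 hold big-endian width and height inside the IHDR chunk.
    return struct.unpack(">II", data[16:24])

def poster_ok(data: bytes) -> bool:
    """Check the poster spec: landscape, >=1000x600 px, <=3 MB."""
    width, height = png_dimensions(data)
    return (len(data) <= MAX_BYTES
            and width > height      # landscape orientation
            and width >= 1000
            and height >= 600)
```

JPEG dimensions are stored differently (in SOF markers), so a JPEG poster would need a separate parser or an imaging library such as Pillow.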

Posters should be emailed to Greg Walsh ([email protected]) for installation into the poster session.

  • There are three times for poster sessions but all posters will be available in all sessions.
  • We ask that all presenters be present for at least one poster session over the day (to accommodate different time zones and locations).

Paper Presentation Instructions {#paper-pres}

  • Paper presentations are only 5 minutes so you should have a very short version of your work to present, e.g., 3-4 slides.
  • You will be able to screen share and address the virtual room in our Gather space, but please remember to test your audio/video/screen sharing setup before the workshop.
  • We ask that all presenters log into Gather at the start of the session and be present in the paper presentation space.
  • Please make sure you test screen sharing in Gather before the workshop 😊

Topics and Themes

Our workshop is open to diverse interpretations of interactive generative AI, characterized by the AI systems’ abilities to make new things, learn new things, and foster serendipity and emergence. We are interested in several dimensions of generative AI, including mixed initiative, human–computer collaboration, or human–computer competition, with the main focus on interaction between humans and generative AI agents. We welcome researchers from various disciplines, inviting researchers from different creative domains including, but not limited to, art, images, music, text, style transfer, text-to-image, programming, architecture, design, fashion, and movement. Because of the far-reaching implications of Generative AI, we propose the following list of non-exhaustive, thematic questions to guide our discussions at the workshop:

  • What is generative AI and how can we leverage diverse definitions of it? Does generative AI go beyond intelligent interaction? What distinguishes generative AI?
  • How do you design in this characteristically uncertain space? What design patterns do we need to think about? How does uncertainty play into this, and how do we help people set expectations when designing with AI?
  • Do generative AI algorithms contribute needed serendipity to the design process—or simply randomness—or worse, chaos?
  • Is presenting AI as a desirable and “objective” method appropriate for generative AI?

We encourage people to write and answer their own questions as well. We hope that the workshop leads to new ways-of-thinking.

These themes can be addressed within the following topics:

  • The emerging capabilities of generative AI.
  • The existence of generative AI in different domains, including (but not limited to) images, music, text, design, and motion.
  • The role of generative AI in assisting, replacing, and regimenting human work.
  • Human-AI collaboration and co-creative systems.
  • Ethical issues including misuses and abuses, provenance, copyright, bias, and diversity.
  • The uncanny valley in Human-AI interactions.
  • Speculative futures of generative AI and its implications for human-AI utopias and dystopias.

As above, we encourage people to add new topics and domains.

Contributing Your Work

Submissions may be up to 4 pages long (references may appear on additional pages), following the CHI 2024 instructions for papers.

The submission deadline and the submission website can be found at the top of the page.

Please send any comments or questions to Michael Muller, [email protected].

Accepted papers will be presented in the workshop and authors can choose to publish their paper here on the workshop website under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

Program Committee

  • Alexa Steinbrück, Hochschule für Gestaltung Schwäbisch-Gmünd
  • Andre Holzapfel, KTH Royal Institute of Technology in Stockholm
  • Anna Kantosalo, University of Helsinki
  • Anna-Kaisa Kaila, KTH Royal Institute of Technology, Stockholm
  • Anthony Jameson, Contaction AG
  • Benedikte Wallace, University of Oslo
  • Briane Paul Samson, De La Salle University
  • Charles Martin, Australian National University
  • Cory Newman, University of Baltimore
  • Deepak Giri, Indiana University
  • Dmitry Zubarev, IBM Research-Almaden
  • Erinma Ochu, UWE Bristol
  • Gaole He, Delft University of Technology
  • Garrett Allen, Delft University of Technology
  • Greg Walsh, University of Baltimore
  • Han Li, National University of Singapore
  • Hernisa Kacorri, University of Maryland, College Park
  • Hyo Jin Do, IBM Research
  • Imke Grabe, IT University of Copenhagen
  • Jacquelyn Martino, IBM Research
  • Jessica He, IBM Research
  • Jordan Aiko Deja, De La Salle University
  • Jussi Holopainen, City University of Hong Kong
  • Justin Weisz, IBM Research AI
  • Lorenzo Corti, Delft University of Technology
  • Manish Nagireddy, IBM Research
  • Mary Lou Maher, Computing Research Association
  • Matthew Klenk, Toyota Research Institute
  • Matthew Hong, Toyota Research Institute
  • Michael Hind, IBM Research
  • Michael Muller, IBM Research
  • Minsik Choi, The Australian National University
  • Nabila Chowdhury, University of Toronto
  • Paolo Grigis, Unibz
  • Rgee Wharlo Gallega, Future University Hakodate
  • Richard Lance Parayno, University of Salzburg
  • Sachita Nishal, Northwestern University
  • Shabnam Hakimi, Toyota Research Institute
  • Tae Soo Kim, KAIST
  • Viktoria Pammer-Schindler, Graz University of Technology
  • Werner Geyer, IBM Research
  • Yash Tadimalla, University of North Carolina Charlotte
  • Yichen Wang, Australian National University
  • Zijian Ding, The University of Maryland

Organizers

Anna Kantosalo is a Postdoctoral Researcher at the University of Helsinki. The focus of her research is Human-Computer Co-Creativity, and she is defining models and methods for building and describing systems in which humans and autonomous creative agents can work together. She has chaired the Future of Co-Creative Systems workshop adjoined with the International Conference on Computational Creativity twice.

Mary Lou Maher is a Professor in the Software and Information Systems Department at the University of North Carolina at Charlotte. Her early research in AI-based generative design has led to a human-centered approach to computational creativity and co-creative systems. She chaired the Creativity and Cognition Conference (2019) and the International Conference on Computational Creativity (2019), and has organized several workshops on AI-based design and creativity.

Charles Patrick Martin is a Lecturer in Computer Science at the Australian National University. Charles works at the intersection of music, AI/ML and HCI. He studies how humans can interact creatively with intelligent computing systems and how such systems might fit in the real world. Charles has organised multiple generative-AI-focused workshops at the New Interfaces for Musical Expression conference.

Michael Muller works as a Senior Research Scientist at IBM Research in Cambridge MA USA. With colleagues, he has analyzed how domain experts make use of generative AI outcomes, and how humans intervene between "the data" and "the model" as aspects of responsible and accountable data science work. His research occurs in a hybrid space of Human-Centered AI (HCAI), Human-Computer Interaction (HCI), design, and social justice.

Greg Walsh is an associate professor at the University of Baltimore, where he teaches courses in Design. He is an interaction design researcher who focuses on user-centered, inclusive design for children and adults, and he has received a prestigious Google Faculty Research Award. His work seeks to include more voices in the design process; it has included participatory design sessions in Baltimore City libraries and is now exploring the use of generative AI as a co-design partner.
