diff --git a/_data/navigation.yml b/_data/navigation.yml index 03a3bfa..434e9af 100644 --- a/_data/navigation.yml +++ b/_data/navigation.yml @@ -14,8 +14,8 @@ main: #- title: "Proceedings" # url: "https://aclanthology.org/events/emnlp-2022/" - #- title: "Program" - # url: /program/ + - title: "Program" + url: /program/ #- title: "Registration" # url: /registration/ @@ -54,17 +54,17 @@ main: # url: /ethics/faq/ -#program: -# - title: Program Details -# children: -# - title: "Conference Program" -# url: /program/ +program: + - title: Programs + children: + - title: "Conference Overview" + url: /program/ # - title: "Keynote" # url: /program/keynotes/ # - title: "Workshops" # url: /program/workshops/ -# - title: "Tutorials" -# url: /program/tutorials/ + - title: "Tutorials" + url: /program/tutorials/ # - title: "Careers in NLP" # url: /program/careers_in_nlp/ # - title: "Social Programs" diff --git a/_pages/committees/organization.md b/_pages/committees/organization.md index 9e1c900..94171ce 100644 --- a/_pages/committees/organization.md +++ b/_pages/committees/organization.md @@ -89,7 +89,7 @@ name="Luis Chiruzzo" picture="/assets/images/committee/luis_chiruzzo.jpeg" site="https://scholar.google.com/citations?user=C7c4uCsAAAAJ&hl=es" institution="Universidad de la República" -email = "luis.chiruzzo@gmail.com" +email = "luis.chiruzzo@gmail.com,luischir@fing.edu.uy" %} {% include committee-member.html diff --git a/_pages/home.md b/_pages/home.md index 4608d64..ef052d2 100644 --- a/_pages/home.md +++ b/_pages/home.md @@ -3,11 +3,16 @@ title: "The 62nd Annual Meeting of the Association for Computational Linguistics layout: splash permalink: / header: - overlay_image: "/assets/images/bangkok/bangkok.jpg" - caption: 'Photo by Kanapol Vorapoo on Unsplash' -excerpt: "Bangkok, Thailand
August 12–17, 2024" + overlay_image: "/assets/images/bangkok/bangkok-banner.jpeg" + caption: 'Photo by boykpc on iStock' +excerpt: "Bangkok, Thailand
August 11–16, 2024" --- +## Welcome! + +The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) will take place in **Bangkok, Thailand** from **August 11th to 16th, 2024**. +More information will be announced soon. + ## News **The official ACL 2024 website is launched.** @@ -91,8 +96,13 @@ The [**Conference Program Schedule**](/program/) is now online. !--> + ## Important Dates +| Tutorials | Sunday | August 11, 2024 | +| Main Conference | Monday – Wednesday | August 12 – 14, 2024 | +| Workshops | Thursday – Friday | August 15 – 16, 2024 | + -| Conference date | Monday – Saturday | August 12 – 17, 2024 | -{: .dates-table} - -## Welcome! -ACL 2024 will take place in Bangkok, Thailand from **August 12th to 17th, 2024**. + diff --git a/_pages/program/program.md b/_pages/program/program.md new file mode 100644 index 0000000..6703c9e --- /dev/null +++ b/_pages/program/program.md @@ -0,0 +1,14 @@ +--- +title: Conference Overview +layout: single +excerpt: "ACL 2024 Conference Overview." +permalink: /program/ +sidebar: + nav: "program" +--- + +Information will be updated soon. + + + + diff --git a/_pages/program/tutorials.md b/_pages/program/tutorials.md new file mode 100644 index 0000000..9a88a0c --- /dev/null +++ b/_pages/program/tutorials.md @@ -0,0 +1,43 @@ +--- +title: Tutorials +layout: single +excerpt: "ACL 2024 Tutorials." +permalink: /program/tutorials/ +sidebar: + nav: "program" +--- + +

**List of Tutorials**

+The following tutorials have been accepted for ACL 2024.
+More information will be provided soon.


+ +**AI for Science in the Era of Large Language Models**
+ Zhenyu Bi, Minghao Xu, Jian Tang and Xuan Wang +* The capabilities of AI in the realm of science span a wide spectrum, from the atomic level, where it solves partial differential equations for quantum systems, to the molecular level, predicting chemical or protein structures, and even extending to societal predictions like infectious disease outbreaks. Recent advancements in large language models (LLMs), exemplified by models like ChatGPT, have showcased significant prowess in tasks involving natural language, such as translating languages, constructing chatbots, and answering questions. When we consider scientific data, we notice a resemblance to natural language in terms of sequences – scientific literature and health records presented as text, bio-omics data arranged in sequences, or sensor data like brain signals. The question arises: Can we harness the potential of these recent LLMs to drive scientific progress? In this tutorial, we will explore the application of large language models to three crucial categories of scientific data: 1) textual data, 2) biomedical sequences, and 3) brain signals. Furthermore, we will delve into LLMs' challenges in scientific research, including ensuring trustworthiness, achieving personalization, and adapting to multi-modal data representation. + +**Automatic and Human-AI Interactive Text Generation (with a focus on Text Simplification and Revision)**
+ Yao Dou, Philippe Laban, Claire Gardent and Wei Xu +* In this tutorial, we focus on text-to-text generation, a class of natural language generation (NLG) tasks that take a piece of text as input and then generate a revision that is improved according to some specific criteria (e.g., readability or linguistic styles), while largely retaining the original meaning and the length of the text. This includes many useful applications, such as text simplification, paraphrase generation, style transfer, etc. In contrast to text summarization and open-ended text completion (e.g., story generation), the text-to-text generation tasks we discuss in this tutorial are more constrained in terms of semantic consistency and targeted language styles. This level of control makes these tasks ideal testbeds for studying the ability of models to generate text that is both semantically adequate and stylistically appropriate. Moreover, these tasks are interesting from a technical standpoint, as they require complex combinations of lexical and syntactical transformations, stylistic control, and adherence to factual knowledge -- all at once. With a special focus on text simplification and revision, this tutorial aims to provide an overview of the state-of-the-art natural language generation research from four major aspects -- Data, Models, Human-AI Collaboration, and Evaluation -- and to discuss and showcase a few significant and recent advances: (1) the use of non-autoregressive approaches; (2) the shift from fine-tuning to prompting with large language models; (3) the development of new learnable metrics and fine-grained human evaluation frameworks; (4) a growing body of studies and datasets on non-English languages; (5) the rise of HCI+NLP+Accessibility interdisciplinary research to create real-world writing assistant systems. + +**Computational Expressivity of Neural Language Models**
+ Alexandra Butoi, Ryan Cotterell and Anej Svete +* Language models (LMs) are currently at the forefront of NLP research due to their remarkable versatility across diverse tasks. However, a large gap exists between their observed capabilities and the explanations proposed by established formal machinery. To motivate a better theoretical characterization of LMs' abilities and limitations, this tutorial aims to provide a comprehensive introduction to a specific framework for formal analysis of modern LMs using tools from formal language theory (FLT). We present how tools from FLT can be useful in understanding the inner workings and predicting the capabilities of modern neural LM architectures. We will cover recent results using FLT to make precise and practically relevant statements about LMs based on recurrent neural networks and transformers by relating them to formal devices such as finite-state automata, Turing machines, and analog circuits. Altogether, the results covered in this tutorial will allow us to make precise statements and explanations about the observed as well as predicted behaviors of LMs, as well as provide theoretically motivated suggestions on the aspects of the architectures that could be improved. + +**Presentation Matters: How to Communicate Science in the NLP Venues and in the Wild?**
+ Sarvnaz Karimi, Cecile Paris and Gholamreza Haffari +* Each year a large number of early career researchers join the NLP/Computational Linguistics community, with most starting by presenting their research at the *ACL conferences and workshops. While writing a paper that makes it into these venues is one important step, communicating the outcome is equally important and sets the path to the impact of the research. In addition, not all PhD candidates get the chance to be trained in presentation skills. Research methods courses vary in quality, may not cover scientific communication, and are certainly not all tailored to the NLP community. We are proposing an introductory tutorial that covers a range of communication skills, including writing, oral presentation (posters and demos), and social media presence. This fills a gap for researchers who may not have access to research methods courses or to mentors who could help them acquire such skills. The interactive nature of such a tutorial would allow attendees to ask questions and request clarifications, which would not be possible from reading materials alone. + +**Vulnerabilities of Large Language Models to Adversarial Attacks**
+ Yu Fu, Erfan Shayegan, Md. Mamun Al Abdullah, Pedram Zaree, Nael Abu-Ghazaleh and Yue Dong +* This tutorial serves as a comprehensive guide on the vulnerabilities of Large Language Models (LLMs) to adversarial attacks, an interdisciplinary field that blends perspectives from Natural Language Processing (NLP) and Cybersecurity. As LLMs become more complex and integrated into various systems, understanding their security attributes is crucial. However, current research indicates that even safety-aligned models are not impervious to adversarial attacks that can result in incorrect or harmful outputs. The tutorial first lays the foundation by explaining safety-aligned LLMs and concepts in cybersecurity. It then categorizes existing research based on different types of learning architectures and attack methods. We highlight the existing vulnerabilities of unimodal LLMs, multi-modal LLMs, and systems that integrate LLMs, focusing on adversarial attacks designed to exploit weaknesses and mislead AI systems. Finally, the tutorial delves into the potential causes of these vulnerabilities and discusses possible defense mechanisms. + +**Watermarking for Large Language Models**
+ Xuandong Zhao, Yu-Xiang Wang and Lei Li +* As AI-generated text increasingly resembles human-written content, the ability to detect machine-generated text becomes crucial in both the computational linguistics and machine learning communities. In this tutorial, we aim to provide an in-depth exploration of text watermarking, a subfield of linguistic steganography with the goal of embedding a hidden message (the watermark) within a text passage. We will introduce the fundamentals of text watermarking, discuss the main challenges in identifying AI-generated text, and delve into the current watermarking methods, assessing their strengths and weaknesses. Moreover, we will explore other possible applications of text watermarking and discuss future directions for this field. Each section will be supplemented with examples and key takeaways. + + + + + +