Dates

- [Enrollment] March 2, 2023: Participating teams must enroll by midnight (anywhere on Earth) on March 2, 2023. Enrollment is free and non-binding. Teams can enroll by clicking the button below and filling out the provided form.

Enroll Now

- [Kick-Off Event] March 6, 2023: A kick-off event will be held, with Prof. Philip Thomas providing a high-level introduction to AI safety and fairness topics using Seldonian algorithms, followed by Dr. Austin Hoag providing coding examples of how the Seldonian Toolkit can be used. This event will primarily be held on Zoom, though participants are welcome to attend in-person at UMass Amherst. Further details for this event will be provided to all teams that register.

- [Early-Contest Q&A] March 15, 2023: Prof. Philip Thomas and Dr. Austin Hoag will host an open Q&A session for participating teams. Further details for this event will be provided to all teams that register.

- [Mid-Contest Q&A] March 27, 2023: Prof. Philip Thomas and Dr. Austin Hoag will host an open Q&A session for participating teams. Further details for this event will be provided to all teams that register.

- [Late-Contest Q&A] April 8, 2023: Prof. Philip Thomas and Dr. Austin Hoag will host an open Q&A session for participating teams. Further details for this event will be provided to all teams that register.

- [Final Submissions] April 21, 2023: Final submissions are due at midnight (anywhere on Earth) on April 21, 2023. See the "Submissions" section below for more information about what should be submitted.

- [Award Announcement] April 28, 2023: We aim to announce the winners of the competition by April 28, 2023. The exact date will depend on the volume of submissions.

Eligibility 

Participants may compete individually or in teams of up to three people. All participants must be enrolled at an accredited college or university in the US or Canada, at either the undergraduate or graduate (master's or doctoral) level.

Project and Submission Requirements

Participating teams should select an application of the Seldonian Toolkit. There are no restrictions on the allowed applications, but we recommend choosing one for which you have access to training data and with which your team members are familiar. Applications could range from predicting how far landslides will travel based on features of the slope, with safety guarantees on the chance of under-predictions; to predicting whether a tumor is benign or malignant, with safety guarantees on the false negative rate; to predicting whether someone will commit a crime in the future, while enforcing fairness constraints with respect to race, gender, or age. Some teams might already have applications in mind, while others might begin by brainstorming possible applications. For teams still trying to select an application, we recommend searching for datasets related to machine learning problems where safety or fairness guarantees would be beneficial.
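To make the kinds of constraints above concrete, here is a purely illustrative sketch in plain Python (not the Seldonian Toolkit's API) of a fairness constraint that bounds the difference in false negative rate between two groups. The group labels, tolerance `epsilon`, and toy data are all hypothetical; a Seldonian algorithm would enforce such a constraint with a high-probability guarantee rather than the simple point estimate computed here:

```python
# Illustrative sketch only -- not the Seldonian Toolkit API.
# Shows the kind of fairness constraint (bounded false-negative-rate
# disparity between groups) that a Seldonian algorithm would enforce.

def false_negative_rate(y_true, y_pred):
    """Fraction of actual positives that were predicted negative."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for t, p in positives if p == 0) / len(positives)

def fnr_disparity(y_true, y_pred, groups, group_a, group_b):
    """Absolute difference in FNR between two demographic groups."""
    def subset(g):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        return [y_true[i] for i in idx], [y_pred[i] for i in idx]
    fnr_a = false_negative_rate(*subset(group_a))
    fnr_b = false_negative_rate(*subset(group_b))
    return abs(fnr_a - fnr_b)

# Toy data: labels, predictions, and (hypothetical) group membership.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']

disparity = fnr_disparity(y_true, y_pred, groups, 'A', 'B')
epsilon = 0.4  # hypothetical tolerance
print(f"FNR disparity: {disparity:.3f}, "
      f"constraint satisfied: {disparity <= epsilon}")
# prints: FNR disparity: 0.333, constraint satisfied: True
```

In the toolkit itself, constraints like this are written as behavioral constraint strings and enforced during training; this sketch only shows the quantity being constrained.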

After selecting an application, teams should apply the Seldonian Toolkit. In almost all cases, teams should use the Experiments component of the toolkit to show how effective the Seldonian Toolkit is for their application. The Experiments component is described in this tutorial. It provides plots that show how accurate the learned models are, how much data was required before the system could reliably return solutions, and how often the system violated the desired safety or fairness constraints.
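As a rough illustration of the quantities behind those plots (not the Experiments library's API), the constraint-violation rate and solution rate are simply fractions computed over repeated trials; the trial probabilities below are invented for the sketch:

```python
# Illustrative sketch of two quantities the Experiments plots report:
# the fraction of trials in which the system returned a solution, and
# the fraction in which a returned solution violated the constraint.
# The probabilities are made up; this is not the toolkit's API.
import random

random.seed(0)

def run_trial():
    """Hypothetical trial outcome: (returned_solution, violated_constraint)."""
    returned = random.random() < 0.9                 # system returned a solution
    violated = returned and random.random() < 0.05   # solution violated constraint
    return returned, violated

n_trials = 1000
results = [run_trial() for _ in range(n_trials)]
solution_rate = sum(r for r, _ in results) / n_trials
violation_rate = sum(v for _, v in results) / n_trials
print(f"solution rate: {solution_rate:.2f}, violation rate: {violation_rate:.2f}")
```

Plotting these rates against training-set size is what produces the "how much data was required" and "how often constraints were violated" curves described above.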

Next, teams should put together a report describing their application, its importance, how the Seldonian Toolkit was applied, and the resulting performance of their system. This report should be provided as a markdown (.md) file in a GitHub repository. Each team should then fill out the submission form linked below, which should include a link to the GitHub repository. This is an example of what a team might submit.

Prizes

A prize pool of $2,000 USD will be divided into several awards, and some of the best submissions will be featured on this website. Additional details are provided below in the section titled "Judging Criteria and Winner Selection".

Judging Criteria and Winner Selection

Awards will be given in the following categories. Evaluation will be subjective, performed by a panel of AI faculty and postdoctoral researchers from UMass Amherst, Stanford University, Brown University, DeepMind, and OpenAI. Each team can win at most one award. Submissions are automatically considered for every award for which the submitting team is eligible.

Best Overall Project [$600] Projects will be evaluated based on the potential positive impact of the proposed application, the performance of the trained system, the real-world feasibility of the proposed application, and the clarity and quality of the submitted report and code. All teams are eligible for this award.

Best Overall Project (Undergraduate) [$500] This award is similar to the Best Overall Project, but is restricted to teams that include only undergraduate students. If the winner of the Best Overall Project includes only undergraduate students, the second-best project from an all-undergraduate team will win this award instead.

Most Creative Project [$300] Projects will be evaluated based on the same criteria as the Best Overall Project award, but with significant emphasis placed on the creativity and novelty of the proposed application. Each reviewer on the panel has decades of experience in AI research and has seen many uses of AI and machine learning. Competitive submissions for this award will involve applications that the review panel has not seen before, or that are less common in the AI and ML literature.

Best Runner-Up Application with Fairness Considerations [$300] Projects will be evaluated based on the same criteria as the Best Overall Project award. This award is restricted to projects related to fairness and equality.

Best Runner-Up Application with Safety Considerations [$300] Projects will be evaluated based on the same criteria as the Best Overall Project award. This award is restricted to projects with safety constraints not related to fairness and equality.