2024-2025 NHSEB Regional Case 3 It Tastes Like Dog Food Study Guide with Bonus AI Script Experiments

Here’s a really nice study guide from Coach Michael Andersen with two superb generative AI experiments on the case, as well as a bonus guide on evaluating sources on controversial topics.

Mr. A is going above and beyond, as per usual! And I think the generative AI engagement material is especially cool. Give his strategies a try with other cases and let us know what's working, what isn't, etc.

AI and Ethics Bowl Round 2: ChatGPT Wrote Our Presentation?

Fast, free, and virtually undetectable, ChatGPT offers a tempting combination of ease and stealth. While it can be used as an on-demand, universal tutor for the ambitiously inquisitive, it can also serve as a secret substitute thinker for the time-pressed, disillusioned, or simply unscrupulous.

The line between learning aid and cheatbot isn’t obvious. But there are clear cases. Ask it to help you understand Parfit’s Repugnant Conclusion? Sure. Direct it to write a paper on Parfit’s Repugnant Conclusion which you plan to submit as your original work? No.

Similar logic would seem to apply to Ethics Bowl. Enthusiastic, dedicated bowlers can expand their thinking after hours, engaging a tireless conversation partner with an unmatchable knowledge base, and they can do it without the fear of asking a stupid question or suggesting something taboo.

On the other hand… a team could feed the AI the case and discussion questions (ChatGPT now has direct access to the internet – just provide a hyperlink to the case set), subcontract every bit of the analysis with the right prompts (see the experiments at the end of the attached article), memorize and regurgitate a received view, and as a result learn and grow very little. Such a team might score well on their initial presentation, but would risk an embarrassing exposure during judge Q&A. Perhaps judge interaction will be our primary weapon for combating chatbot abuse.

But rest assured that in this season's bowls, many, many teams will have used ChatGPT and services like it. It is therefore incumbent upon the Ethics Bowl community to think hard (and fast) about appropriate guidelines, and to share them as a baseline to be refined as soon as possible. Even if imperfect, almost any guidance would be preferable to silence, for silence implies anything goes.

Back in April, we invited ChatGPT itself to write this article on the risks and promise of using it for Ethics Bowl prep. Today, naturally intelligent organic person Michael Andersen adds to the discussion with the article below. Organizers, judges, coaches: if you're not convinced this is a risk (an AI-drawback denier), click one of Michael's experiments at the end of the article. Still not worried? I actually had second thoughts about publishing the prompts he used to guide the AI to produce a full presentation script. But the community needs to understand the tool's power. Plus, if Gen X dinosaurs like Michael and me can stumble our way through an AI conversation, the Gen Z tech wizards whom we work so hard to honorably mentor aren't likely to learn anything new.

Last, if you have thoughts on acceptable use of AI for Ethics Bowl prep, please share them in a comment. We’re also considering some sort of video discussion in the near future – shoot me an email if you’d like to be included, and thanks to everyone in the community who’s taking this topic seriously.

AI and Ethics Bowl: Enhancing or Undermining Critical Thinking Skills?

The following article, including the title, was written by ChatGPT. Why? Earlier this semester I caught a student using AI to complete their weekly reflection prompt in my Ethics Intro. I gave them a zero, but was encouraged by Ethics Olympiad (Australian) students, a retired mentor (thanks, Gary!), and others to try AI for myself, and to consider how we might use it symbiotically – as a valuable aid rather than an enemy or competitor. Enjoy, Matt

As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, ethical questions about its use and development are becoming more pressing. One area where these questions are particularly relevant is in the field of applied ethics, where AI is being used to think through complex ethical cases. One example of this is Ethics Bowl, a competition in which teams of students analyze and debate ethical dilemmas. Some teams are now using AI tools to help them prepare for Ethics Bowl, raising important ethical questions about the role of AI in ethical decision-making.

On the one hand, using AI to analyze ethical cases could be seen as a positive development. AI tools can quickly identify relevant ethical principles and concepts, help teams to identify potential consequences of different courses of action, and provide guidance on how to approach difficult ethical decisions. This can be especially helpful for students who are new to the field of applied ethics and may not have as much experience thinking through complex ethical dilemmas.

However, there are also several potential ethical concerns associated with using AI to analyze ethical cases. One concern is that relying too heavily on AI could lead to a reduction in critical thinking skills. If students become overly reliant on AI tools to identify ethical principles and make decisions, they may be less likely to develop the skills needed to think through ethical dilemmas on their own. This could have negative implications for their future careers and their ability to make sound ethical decisions in a wide range of contexts.

Another concern is that AI may not always be able to accurately capture the nuances of ethical decision-making. Ethical decision-making often involves considering a wide range of factors, such as the consequences of different courses of action, the values and beliefs of different stakeholders, and the potential impact of decisions on different communities. AI may not be able to fully capture these nuances, which could lead to oversimplification of complex ethical dilemmas.

Additionally, there are concerns about the potential biases that may be present in AI tools. AI algorithms are only as unbiased as the data they are trained on, and if the data used to train the AI is biased, the AI may produce biased results. This could have negative implications for Ethics Bowl competitions, where unbiased analysis and decision-making are essential.

In conclusion, while using AI to analyze ethical cases may have some benefits, it also raises important ethical questions about the role of AI in ethical decision-making. As AI continues to develop and become more integrated into our lives, it will be important to carefully consider the ethical implications of its use in fields like applied ethics. Ultimately, the goal should be to use AI tools in a way that enhances our ability to make sound ethical decisions, while also preserving our ability to think critically and to recognize the nuances of complex ethical dilemmas.