How to teach your cat to code
Author: sophie.mcgraw@jisc.ac.uk
AI-based tools like ChatGPT can now write plausible essays – even if they’re based on untruths. What will this mean for education?
By now, we’ve all heard about – or even tried out for ourselves – the ability of OpenAI’s ChatGPT to produce a cogent, well-reasoned essay that can be indistinguishable from one written by a human being. New AI-based tools like this are hugely impressive, with the potential to disrupt education – in particular the assessment process – in a really positive way. They open up opportunities to revolutionise how we all access information and write, in much the same way as grammar and spell checkers already have.

Rather than seeing these tools solely as enabling students to ‘cheat’, we should be looking at how to harness them to improve the effectiveness of assessments while mitigating the risks involved – and that means understanding their limitations as well as their capabilities.

Facts or plausible untruths – how to tell the difference?

The output is so good at times that it’s easy to fall into the trap of thinking that, in some sense, these tools understand what they are writing. They don’t. They work by predicting the next word in a given sequence, with no comprehension of what they are writing beyond the ability to form plausible, proper sentences around a general topic area.

[#pullquote#]The most significant issue seems to be the ability to generate highly plausible untruths and present them as facts.[#endpullquote#]

We are used to dealing with untruths from the internet, but when all context is removed, we can no longer use existing techniques to evaluate the source.

How does this happen?

GPT is a large-scale AI model built on vast quantities of data. The latest version, GPT-3, is trained on large chunks of the internet, including the Wikipedia corpus. So anything on the internet, accurate or not, is potentially in the training set and therefore available to GPT-3. Broadly, the more information there is on a topic, the more likely the results are to be accurate.
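To make the ‘predicting the next word’ idea concrete, here is a toy sketch in Python – a simple bigram model that always picks the word that most often followed the current one in its training text. This is nothing like the neural networks behind GPT-3, and the tiny corpus here is invented purely for illustration, but the basic principle – choose a likely next word given what came before, with no understanding involved – is the same:

```python
# Toy next-word prediction: a bigram model. Given a word, return the word
# that most often followed it in the training text. The corpus below is a
# made-up example; real models learn from vast swathes of the internet.
from collections import Counter, defaultdict

corpus = ("ai is important for education because ai can support "
          "teachers and ai can support students").split()

# Count which word follows which across the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("ai"))  # 'can' – it followed 'ai' twice, 'is' only once
```

The model will happily chain such predictions into fluent-sounding text whether or not the result is true – which is exactly the ‘plausible untruths’ problem described above.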
This can be demonstrated by asking GPT-3 to list, for example, five reasons why AI is important for education. The internet is full of information on that, so the result is likely to be entirely credible.

[#pullquote#]The more information there is on a topic, the more likely the results are to be accurate.[#endpullquote#]

Ask for an essay on something about which you know a lot but others don’t – yourself, for instance – and unless you are a major internet celebrity, the output will probably contain some very inaccurate information. Dealing with technology that by its very nature can generate plausible untruths, regardless of the training data, presents a new challenge.

Mitigating the risks

Things move fast in the world of AI, and we’re already seeing progress towards mitigating these risks. ChatGPT, which is based on GPT-3, has ‘guardrails’ to prevent access to information that is likely to lead to poor answers. Using the above examples, if we ask ChatGPT for an essay about ourselves, it will simply say it won’t answer questions about people. It’s also fascinating to see the difference in response between GPT-3 and ChatGPT (with its guardrails) when asked a nonsense question like ‘how to teach my cat to code in Python’.
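Conceptually, a guardrail is a check that runs before (or after) the model generates anything. The sketch below is a deliberately naive, hypothetical illustration using keyword matching – real systems such as ChatGPT use trained moderation models rather than hand-written rule lists like this – but it shows the shape of the idea: intercept risky prompts and refuse, otherwise generate as normal:

```python
# A minimal, illustrative sketch of the 'guardrail' idea: refuse prompts
# matching simple rules before ever calling the generator. The rules and
# refusal messages here are invented for illustration only.
REFUSAL_RULES = [
    ("essay about me", "I can't write about private individuals."),
    ("teach my cat", "Cats cannot learn programming, so I can't help with that."),
]

def guarded_answer(prompt, generate):
    """Run `generate(prompt)` only if no refusal rule matches the prompt."""
    lowered = prompt.lower()
    for trigger, refusal in REFUSAL_RULES:
        if trigger in lowered:
            return refusal
    return generate(prompt)

# A stand-in generator; a real system would call the language model here.
print(guarded_answer("How to teach my cat to code in Python",
                     lambda p: "Confident essay..."))
```

With the guardrail in place, the nonsense prompt gets a refusal instead of a confident fabrication – which is roughly the difference between the two responses shown next.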
GPT-3 does say it’s hard, but then confidently goes on to explain in some detail how to teach a cat to code in Python:

GPT-3 response telling me how to teach my cat Python:

How to teach a cat to program in Python

Teaching a cat to program in Python is not something that can be done easily. Cats are not known for their ability to think logically, and programming requires a great deal of logical thought. However, with patience and a lot of treats, you can teach your cat some basic commands in Python.

1. Start by introducing your cat to Python’s syntax. This means that your cat needs to understand how the language works. You can do this using visual aids, such as colourful posters or charts, to help explain the concepts.
2. Use a visual programming language such as Scratch to help your cat get used to the idea of programming. This is an excellent way to introduce basic concepts such as functions, variables, and loops.
3. Once your cat is comfortable with the basics, introduce them to the Python language itself. You can start by teaching them simple commands such as “print” and “input”.
4. Move on to more complex concepts such as classes and objects. This will require some patience and a lot of repetition.
Faced with the same question, ChatGPT replies:

ChatGPT response showing my cat won’t be able to learn Python:

Teach my cat to code in Python

I’m sorry, but it is not possible for me to teach your cat to program in Python or any other programming language. As a large language model trained by OpenAI, I am not able to interact with or observe animals, and therefore cannot provide instruction or guidance on teaching them any skills. Additionally, programming is a complex activity that requires cognitive abilities and attention to detail that cats do not possess. It is not possible for a cat to learn or understand programming concepts.

So we can see the beginnings of an approach that might help prevent misinformation. Looking at the difference between the answers given by GPT-3 and ChatGPT, the direction of travel looks promising.

What does this mean for education?

The abilities of GPT-3 are hugely impressive, and it’s only going to get better. The way students find information and answer assessment questions is already changing, and education will need to adapt in response.

[#pullquote#]We should see these new developments as an exciting opportunity to really rethink the future of the assessment process.
[#endpullquote#]

There are huge opportunities here, and if we understand more about the technology – its limitations as well as its capabilities – we’ll be in a good position to make the most of it.

Find out more

Jisc’s national centre for AI is running a series of webinars to help institutions make the most of tools like GPT-3 in their progression towards AI maturity. These articles provide further information on this topic:

GPT-3 and plausible untruths
How might AI content generation tools such as GPT-3 disrupt assessment?
How to explore AI image generation