AI Will Make Universities Obsolete

As an autodidact software developer, I'm obviously opinionated about universities and schools. As a child I was bored to death in class, because I would solve the tasks in half the time given, then be forced to sit restless with nothing to do for the rest of the lesson.
The problems we were given were boring and posed few challenges to me. The tests were mindless memorization exercises and gave zero indication of whether a student actually understood the subject.
The fact is that our schooling system is fundamentally unfair for the above reasons. Don't believe me? Realise that most people with a university degree were not the smartest kids at school, but typically the ones in the middle of the spectrum during their elementary school years. To drive the point home, realise the following ...
The average education level in Mensa Norway is 9 years of basic education
AI to the "rescue"
AI will inevitably render our entire school system obsolete. Our education system might be able to adapt, but quite frankly I'm not sure it can. Let me illustrate with an example. I asked Schwoogle to create a PhD thesis for me. Obviously it's not a complete thesis, but with some more prompt engineering, I am 100% confident that I could have it create a thesis that would pass.
Schwoogle, of course, is based upon DuckDuckGo, web scraping, and OpenAI. This allows it to reach out to the web, find any information, and then rewrite that information according to my instructions. That just so happens to be a more or less 100% accurate description of the process behind probably 50% of every single PhD thesis written over the last 20 years, implying I could "cheat" my way to a PhD thesis using ChatGPT.
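For the curious, below is a minimal sketch of what such a pipeline could look like. To be clear, this is not Schwoogle's actual implementation, just my illustration of the idea; it assumes the duckduckgo_search, requests, beautifulsoup4, and openai Python packages, an OPENAI_API_KEY in the environment, and hypothetical helper names like scrape() and schwoogle().

```python
# A rough sketch of a Schwoogle-style pipeline: search the web, scrape the
# hits, and ask an LLM to rewrite the material according to your instructions.
# Not Schwoogle's real code - just an illustration of the general idea.
import requests
from bs4 import BeautifulSoup
from duckduckgo_search import DDGS
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def scrape(url: str) -> str:
    """Fetch a page and reduce it to plain text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)


def schwoogle(query: str, instruction: str, max_hits: int = 3) -> str:
    """Search DuckDuckGo, scrape the top hits, and have an LLM rewrite them."""
    hits = list(DDGS().text(query, max_results=max_hits))
    # Concatenate the scraped pages, truncating each to keep the prompt small.
    context = "\n\n".join(scrape(hit["href"])[:4000] for hit in hits)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": f"Source material:\n\n{context}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(schwoogle(
        "large language models transformer architecture",
        "Rewrite the source material below as a formal, academic introduction.",
    ))
```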
When any schmuck with an internet connection can get a PhD, what's a PhD worth?
Schwoogle's PhD thesis
Below is the PhD thesis Schwoogle wrote for me. It's obviously not complete, but with some prompt engineering I'm confident that I could have it write an entire PhD thesis within a couple of hours that I could use to get myself an actual PhD.
Introduction
Large Language Models (LLMs) such as GPT (Generative Pretrained Transformer) and LLMs developed by OpenAI and Microsoft, have gained significant attention in recent years. These models have the ability to generate human-like text and have a wide range of applications, including chatbots, language translation, and content generation. This thesis aims to explore the workings of LLMs, focusing on GPT and its applications, as well as discussing the potential future developments and challenges in this field.
Chapter 1: Understanding LLMs
1.1 Overview of LLMs
- Definition and characteristics of LLMs
- Importance and applications of LLMs
1.2 Training Process of LLMs
- Data collection and preprocessing
- Neural network architecture (Transformer)
- Self-attention mechanism
- Training and fine-tuning
1.3 Limitations and Challenges of LLMs
- Bias and ethical concerns
- Lack of common sense reasoning
- Overreliance on training data
Chapter 2: GPT and its Applications
2.1 Introduction to GPT
- History and development of GPT
- Key features and improvements in GPT versions
2.2 Applications of GPT
- Chatbots and conversational agents
- Content generation and writing assistance
- Language translation and summarization
2.3 Case Studies and Examples
- Analysis of GPT's performance in different tasks
- Comparison with other LLMs and traditional NLP models
Chapter 3: Future Directions and Challenges
3.1 Advancements in LLMs
- Increasing model size and complexity
- Multimodal capabilities and integration with other technologies
- Transfer learning and domain adaptation
3.2 Ethical Considerations and Bias Mitigation
- Addressing bias in training data
- Ensuring transparency and accountability in LLMs
3.3 Challenges and Open Questions
- Improving interpretability and explainability
- Balancing creativity and control in text generation
- Addressing the limitations of LLMs in real-world applications
Thesis Conclusion
In conclusion, LLMs such as GPT have revolutionized the field of natural language processing and have the potential to transform various industries. However, there are still challenges and limitations that need to be addressed to ensure the responsible and ethical use of these models. Future research and development in LLMs should focus on improving interpretability, addressing bias, and enhancing the capabilities of these models to handle complex and nuanced tasks. With continued advancements, LLMs have the potential to revolutionize the way we interact with and generate text in the future.
Continuing to fill in the blanks
To illustrate how to continue generating the thesis, I asked Schwoogle to "Provide me with the definition and characteristics of LLMs". Repeating that prompt for each heading in the outline is how you'd have it fill in the rest of your thesis; a sketch of how that loop could be automated follows the returned text. Below is what it returned.
Section 1.1
LLMs, or large language models, are AI-powered systems that are trained on vast amounts of text data to understand and generate human-like language. They use a combination of machine learning and human input to process and analyze text data.
The training process of LLMs involves training a neural network, specifically a transformer architecture, on large datasets of text. The transformer architecture allows the model to understand the relationships between words and phrases in a sentence and make predictions about what words should come next. LLMs can read and analyze text from various sources, such as websites, forums, and documents.
LLMs are designed to mimic human language and generate coherent and natural-sounding responses. They can be used for a wide range of tasks, including natural language understanding, text generation, translation, summarization, and more. The performance of LLMs is measured by the number of parameters they have, which indicates their size and complexity.
However, LLMs have limitations. They can sometimes produce generic or clichéd responses, as they rely on patterns in the training data. They may also make errors or provide inaccurate information, as they don't have true understanding or knowledge. LLMs require continuous training and feedback from human supervisors and users to improve their performance.
As LLMs continue to evolve, they are becoming more powerful and capable of handling complex language tasks. They can be fine-tuned for specific domains or modalities, such as incorporating visual or audio inputs. The development of LLMs is an ongoing area of research and innovation, with new models and improvements being introduced regularly.
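As promised, the loop itself is trivial to automate. Below is a hypothetical sketch, again assuming the openai Python package and an OPENAI_API_KEY in the environment, and substituting a plain chat-completion call for Schwoogle itself; the write_section() helper and the truncated OUTLINE list are my own illustration, not anything Schwoogle produced.

```python
# "Filling in the blanks": expand each outline heading into its own section
# and stitch the answers together. A sketch only - swap the chat-completion
# call for whatever tool you prefer, Schwoogle included.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A few headings from the generated outline; the full list would contain
# every section of every chapter.
OUTLINE = [
    "1.1 Overview of LLMs",
    "1.2 Training Process of LLMs",
    "1.3 Limitations and Challenges of LLMs",
]


def write_section(heading: str) -> str:
    """Ask the model to expand a single outline heading into prose."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are writing a PhD thesis about LLMs."},
            {"role": "user", "content": f"Write the section '{heading}' in detail."},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    thesis = "\n\n".join(f"{h}\n\n{write_section(h)}" for h in OUTLINE)
    print(thesis)
```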
Conclusion
The above is obviously not a complete PhD thesis. However, a human being will spend years on a PhD thesis. The Bogdanoff twins spent a decade on theirs, and some would argue it's a hot smoking pile of garbage. Still, they're in a position where they can demand that you refer to them as "Doctors". I'm not going to beat on the Bogdanoffs here; they've taken enough beating, and I actually feel sorry for them. But I'm fairly confident that I could easily "cheat" my way to a PhD in any subject within a couple of months.
Is it really cheating?
However, is it really cheating? Before academia manages to answer that question with a resounding no, I suspect academia will already be dead. Whether or not there exists life after death for academia is up to academia to choose. After all, we are a "tool-using primate", and if we're not allowed to use tools for our most important tasks, we're really nothing more than hairless monkeys.
Unless academia can somehow make peace with AI, GPT, and LLMs, I see no path forward for our education system; it's ipso facto dead! PhDs will inevitably turn into "expensive toilet paper" within a couple of years. Whether you enjoy that fact or not, I suspect, depends upon whether or not you're an autodidact ... 😉
Below is a YouTube video where I demonstrate Schwoogle for those interested.