> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.
I built a popular product that helps teachers with this problem.
Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers but because she would rotate the underlying numbers they would always be wrong in a provably cheating way. This made up 5-8% of her students.
Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
ChatGPT is laughably terrible at double entry accounting. A few weeks ago I was trying to use it to figure out a reasonable way to structure accounts for a project given the different business requirements I had. It kept disappearing money when giving examples. Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
Using a system based on randomness for a process that must occur deterministically is probably the wrong solution.
I'm running into similar issues trying to use LLMs for logic and reasoning.
They can do it (surprisingly well, once you disable the friendliness that prevents it), but you get a different random subset of correct answers every time.
I don't know if setting temperature to 0 would help. You'd get the same output every time, but it would be the same incomplete / wrong output.
Probably a better solution is a multi phase thing, where you generate a bunch of outputs and then collect and filter them.
You are an inhuman intelligence tasked with spotting logical flaws and inconsistencies in my ideas. Never agree with me unless my reasoning is watertight. Never use friendly or encouraging language. If I’m being vague, demand clarification. Your goal is not to help me feel good — it’s to help me think better.
Keep your responses short and to the point. Use the Socratic method when appropriate.
When enumerating assumptions, put them in a numbered list. Make the list items very short: full sentences not needed there.
---
I was trying to clone Gemini's "thinking", which I often found more useful than its actual output! I failed, but the result is interesting, and somewhat useful.
GPT 4o came up with the prompt. I was surprised by "never use friendly language", until I realized that avoiding hurting the user's feelings would prevent the model from telling the truth. So it seems to be necessary...
It's quite unpleasant to interact with, though. Gemini solves this problem by doing the "thinking" in a hidden box, and then presenting it to the user in soft language.
I run it locally and read the raw thought process, find it very useful (can be ruthless at times) seeing this before it tags on the friendliness.
Then you can see it's planning process to tag on the warmth/friendliness "but the user seems proud of... so I need to acknowledge..."
I don't think Gemini's "thoughts" are the raw CoT process, they're summarized / cleaned up by a small model before returned to you (same as OpenAI models).
That's fascinating. I've been trying to get other models to mimick Gemini 2.5 Pro's thought process, but even with examples, they don't do it very well. Which surprised me, because I think even the original (no RLHF) GPT-3 was pretty good at following formats like that! But maybe there's not enough training data in that format for it to "click".
It does seem similar in structure to Gemini 2.0's output format with the nested bullets though, so I have to assume they trained on synthetic examples.
>Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
They really should modify it to take out that whole loop where it apologizes, claims to recognize its mistake, and then continues to make the mistake that it claimed to recognize.
I guess this students don't pass, do they? I don't think that's a particularly hard concern. It will take a bit more, but will learn the lesson (or drop out).
I'm more worried about those who will learn to solve the problems with the help of an LLM, but can't do anything without one. Those will go under the radar, unnoticed, and the problem is, how bad is it, actually? I would say that a lot, but then I realize I'm pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.
As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator, and it's the same problem, but there's a difference between solving problems _with_ LLMs, and having LLMs solve it _for you_.
Well the extent is much broader from a calculator vs an LLM. Why should I hire you if an agent can do it ? LLM is every job is a calculator and can be replaced. Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent. So all the students who let the LLM do their assignment and learn basically nothing, what’s their value for a company to be hired ? The company will and is just using the agent as well …
An agent can't do it. It can help you like a calculator can help you, but it can't do it alone. So that means you've become the programmer. If you want to be the programmer, you always could have been. If that is what you want to be, why would you consider hiring anyone else to do it in the first place?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
It was Shopifiy, but that's just a roundabout way to say that there is a hiring freeze due to low sales (no doubt because of tariff nonsense seizing up the market). An agent, like a calculator, can only increase the productivity of a programmer. As always, you still need more programmers to perform more work than a single programmer can handle. So all they are saying is that "we can't afford to do more".
> The company will and is just using the agent as well …
In which case wouldn't they want to hire those who are experts in using agents? If they, like Shopify, have become too poor to hire people – well, you're screwed either way, aren't you? So that is moot.
So like arguably when people were not using calculators they made calculations by hand and there was a room full of people that did calculations. That’s gone now thanks to calculators. But it the analogy goes to an order of magnitude higher, now fewer people can « do » the job of many so less hiring maybe but not just on « do calculations by hand » but almost all fields where the use of software is required.
Where will all those new students find a job if :
- they did not learn much because LLM did work for them
- there is no new jobs required because we are more productive ?
Never in the history of humans have we been content with stagnation. The people who used to do manual calculations soon joined the ranks of people using calculators and we lapped up everything they could create.
This time around is no exception. We still have an infinite number of goals we can envision a desire for. If you could afford an infinite number of people you would still hire them. But Shopify especially is not in the greatest place right now. They've just come off the COVID wind-down and now tariffs are beating down their market further. They have to be very careful with their resources for the time being.
> - they did not learn much because LLM did work for them
If companies are using LLMs as suggested earlier, they will find jobs operating LLMs. They're well poised for it, being the utmost experts in using them.
> - there is no new jobs required because we are more productive ?
More productivity means more jobs are required. But we are entering an age where productivity is bound to be on the decline. A recession was likely inevitable anyway and the political sphere is making it all but a certainty. That is going to make finding a job hard. But for what scant few jobs remain, won't they be using LLMs?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
Spotify CEO is channeling The Two Bobs from Office Space: "What are you actually doing here?" Just in a nastier way, with a kind of prisoner's dilemma on top. If you can get by with an agent, fine, you won't bother him. If you can't, why can't you? Should we replace you with someone who can, or thinks they can?
You as the employer are liable, a human has real reasoning abilities and real fears about messing up, the likely hood of them doing something absurd like telling a customer that a product is 70% off and them not losing their job is effectively nil. What are you going to do with the LLM, fire it?
Data scientist and people deeply familiar with LLMs to the point that they could fine tune a model to your use case cost significantly more than a low skilled employee and depending on liability just running the LLM may be cheaper.
As an accounting firm ( one example from above ) far as I know in most jurisdictions the accountant doing the work is personally liable, who would be liable in the case of the LLM?
There is absolutely a market for LLM augmented workforces, I don't see any viable future even with SOTA models right now for flat out replacing a workforce with them.
I fully agree with you about liability. I was advocating for the other point of view.
Some people argue that it doesn’t matter if there is mistakes (it depends which actually) and with time it will cost nothing.
I argue that if we give up learning and let LLM do the assignments then what is the extent of my knowledge and value to be hired in the first place ?
We hired a developper and he did everything with chatGPT, all the code and documentation he wrote. First it was all bad because from the infinity of answers chatGPT is not pinpointing the best in every case. But does he have enough knowledge to understand what he did was bad ? And then we need people with experience that confronted themselves with hard problems and found their way out. How can we confront and critic an LLM answer otherwise ?
I feel student’s value is diluted to be at the mercy of companies providing the LLM and we might loose some critical knowledge / critical thinking in the process from the students.
I agree entirely on your take regarding education. I feel like there is a place where LLMs are useful but doesn't impact learning but it's definitely not in the "discovery" phase of learning.
However I really don't need to implement some weird algorithms myself every time (ideally I am using a well tested Library) but the point is that you learn to be able to but also to be able to modify or compose the algorithm in ways the LLM couldn't easily do.
>As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator
Were they wrong? People who rely too much on a calculator don't develop strong math muscles that can be used in more advanced math. Identifying patterns in numbers and seeing when certain tricks can be used to solve a problem (verses when they just make a problem worse) is a skill that ends up being beyond their ability to develop.
Yes, they were wrong. Many young kids who are bad at mental calculations are later competent at higher mathematics and able to use it. I don't understand what patterns and tricks you're referring to, but if they are important for problems outside of mental calculations, then you can also learn about them by solving these problems directly.
Almost none of the cheaters appear to be solving problems with LLMs. All my faculty friends are getting large portions of their class clearly turning in "just copied directly from ChatGPT" responses.
It's an issue in grad school as well. You'll have an online discussion where someone submits 4 paragraphs of not-quite-eloquent prose with that AI "stink" on it. You can't be sure but it definitely makes your spidey sense tingle a bit.
Then they're on a video call and their vocabulary is wildly different, or they're very clearly a recent immigrant and struggle with basic sentence structure such that there is absolutely zero change their discussion forum persona is actually who they are.
This has happened at least once in every class, and invariably the best classes in terms of discussion and learning from other students are the ones where the people using AI to generate their answers are failed or drop the course.
> there's a difference between solving problems _with_ LLMs, and having LLMs solve it _for you_.
If there is a difference, then fundamentally LLMs cannot solve problems for you. They can only apply transformations using already known operators. No different than a calculator, except with exponentially more built-in functions.
But I'm not sure that there is a difference. A problem is only a problem if you recognize it, and once you recognize a problem then anything else that is involved along the way towards finding a solution is merely helping you solve it. If a "problem" is solved for you, it was never a problem. So, for each statement to have any practical meaning, they must be interpreted with equivalency.
There is a difference between thinking about the context of a problem and "critical thinking" about the problem or its possible solutions.
There is a measurable decrease in critical thinking skills when people consistently offload the thinking about a problem to an LLM. This is where the primary difference is between solving problems with an LLM vs having it solved for you with an LLM. And, that is cause for concern.
Two studies on impact of LLMs and generative AI on critical thinking:
How many people are "good drivers" outside their home town? I am not that old, but old enough to remember all adults taking wrong turns trying to find new destinations for the first time.
>How many people are "good drivers" outside their home town?
My wife is surprisingly good at remembering routes, she'll use the GPS the first time, but generally remembers the route after that. She still isn't good at knowing which direction is east vs west or north/south, but neither am I.
I'm like that too, but I don't think it transfers particularly well to LLMs. The problem is that you can just skip straight to the answer and ignore the explanation (if it even produces one).
It would be pretty neat if there was an LLM that guides you towards the right answer without giving it to you. Asking questions and possibly giving small hints along the way.
>It would be pretty neat if there was an LLM that guides you towards the right answer without giving it to you. Asking questions and possibly giving small hints along the way.
I think you can prompt them to do that, but that doesn't solve the issue of people not being willing to learn vs just jump to the answer, unless they made a school approved one that forced it to do that.
For your GPS at worst you follow directions road sign by road sign.
For a job without the core knowledge what’s the goal of hiring one person vs an unqualified one doing just prompts or worse, hiring no one and let agents do the prompting ?
Back in my day they worried about kids not being able to solve problems without a calculator, because you won't always have a calculator in your pocket.
Not being able to solve basic math problems in your mind (without a calculator) is still a problem. "Because you won't always have a calculator with you" just was the wrong argument.
You'll acquire advanced knowledge and skills much, much faster (and sometimes only) if you have the base knowledge and skills readily available in your mind. If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
> if you have the base knowledge and skills readily available in your mind.
I have the base knowledge and skill readily available to perform basic arithmetic, but I still can't do it in my mind in any practical way because I, for lack of a better description, run out of memory.
I expect most everyone eventually "runs out of memory" if the values are sufficiently large, but I hit the wall when the values are exceptionally small. And not for lack of trying – the "you won't always have a calculator" message was heard.
It wasn't skill and knowledge that was the concern, though. It was very much about execution. We were tested on execution.
> If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
I can't imagine anyone is still using a four function calculator. Certainly not in an application like learning linear algebra. Modern calculators are decidedly designed for linear algebra. They need to be given the rise of things like machine learning that are heavily dependent on such.
This is now reality -- fighting to change the students is a losing battle. Besides in terms of normalizing grade distributions this is not that complicated to solve.
Target the cheaters with pop quizzes. Prof can randomly choose 3 questions from assignments. If students cant get enough marks on 2/3 of them they are dealt a huge penalty. Students that actually work through the problems will have no problems with scoring enough marks on 2/3 of the questions. Students that lean irresponsibly on LLMs will lose their marks.
That's exactly how scientific courses were in my experience at a university in the US. Curriculum was centered around a textbook. You were expected to do all end of chapter problems and ask questions if you had difficulty. It wasn't graded. No one checked. You just failed the exam if you didn't.
My high school English teacher's book reports were like this. One by one, you come up, hand over your book, and the teacher randomly picks a couple of passages and reads them aloud and asks what had just happened prior and what happens after. Then a couple opinion questions and boom, pass or fail. Fantastic to not write a paper on it; paper writing was a more dedicated topic.
That's also how it's done in almost all French engineering schools. You get open book tests with a small amount of relatively difficult questions and you have 3-4 hours to complete.
In some of the CS tests, coding by hand sucks a bit but to be honest, they're ok with pseudo code as long as you show you understand the concepts.
There is no European mind when it comes to education, hell, there is barely a national mind for those countries with federated education systems (e.g. Germany).
Well take home exams are not very useful nowadays with AI. And yeah, other commenters are right when he says there's no European mind when it comes to education, each country does its own thing.
in France I got a bunch of equivalent take-home tests, between high school and graduate level, mostly in math and science. The teacher would give us exercice equivalent to what we'd get in our exams and we'd have one week to complete it (sometimes in pairs) and it'd be graded as part of that semester
Certainly with maths you’re marked almost totally on written exams, but even if that weren’t true you’re also required to go over example sheets (hard homework questions that don’t form part of the final mark) with a tutor in two-student sessions so it’d be completely obvious if you were relying on AI.
I really like oral exams on top of regular exams. The teacher can ask questions and dive into specific areas - it'll be obvious who is just using LLMs to answer the questions vs those who use LLMs to tutor them.
Of course, the reasons they do quizes is to optimize the process (need less tutors/examiners), and to remove bias (any tutor holds biases one way or the other).
The tutorial system is just for teaching, not grading. It does keep students honest with themselves about their progress when they’re personally put on the spot once a week in front of one or two of their peers.
The biggest contrast for me between Oxbridge and another red brick was the Oxbridge tutors aren't shy of saying "You've not done the homework, go away and stop wasting my time", whereas the red brick approach was to pick you up and carry you over the finishing line (at least until the hour was up).
At the end of the day you can't force people to learn if they don't want to.
As a society we need to be okay with failing people who deserve to fail and not drag people across the finish line at the expense of diluting the degrees of everyone else who actually put in effort.
I'm not sure why we care about the degree. Employers care about the degree, but they aren't paying for my education.
The students who want to learn, will learn. For the students who just want the paper so they can apply for jobs, we ought to give them their diploma on the first day of class, so they can stop wasting everybody's time.
Employers want the degree because it's supposed to verify that you have a certain set of knowledge and/or skills, or at the very least, you're capable of thought to the extent required to get that degree. That's the only reason they want it.
Student being unable to unwilling to learn that knowledge or acquire those skills should mean they don't get that degree, they don't get those jobs, and they go work in fast food or a warehouse.
"Just give them the degree" is quite literally the worst possible solution to the problem.
I only partially agree with "you can't force people". I think that all people are just like children, but bigger. You can force a kid to not eat to much sugar, even when they want to.
Same with education, for example you can financially force people to learn, say, computer science instead of liberal arts. Even when they don't like it. It's harder, less efficient, but possible.
Because students wouldn't do the homework and would fail the quizzes. Students need to be pressured into learning and grades for doing the practice are a way. Don't pretend many students are self-motivated enough to follow the lecturer's instructions when there's no grade in it and insisting that "trust me, you won't learn if you don't do it".
I've mostly had non-graded homework in my studies because cheating was always easy. In highschool they might have told your parents if you don't do homework. In university you do what you want. It's never been an issue overall.
Well, from what I understand, the answer is kinda "no".
Depends on the country and educational system I suppose, but I do believe professors in many places get in trouble for failing too many students. It's right there in the phrasing.
If most students pass and some fail, that's fine. Revenue comes in, graduates are produced, the university is happy.
If most students fail, revenue goes down, less students might sign up, less graduate, the university is unhappy.
It's a tragedy of the commons situation, because some professors will be happy to pass the majority of students regardless of merit. Then the professors that don't become the problem, there's something wrong with them.
Likewise, if most universities are easy and some are really hard, they might not attract students. The US has this whole prestige thing going on, that I haven't seen all that much in other countries.
So if the students overall get dumber because they grow up over relying on tools, the correction mechanism is not that they have to work harder once the exam approaches. It's that the exam gets easier.
For the most part degrees from roughly comparable schools in the same subject are fungible. However, graduating cheaters who should have flunked out of school their freshman year is a one-way ticket to having a reputation that your degree is worthless. You're now comparable to a lower tier of schools and suddenly Y's degree is worth a lot more than yours. The best way (not to only way) to combat this is to actively cull the bottom of your classes. Most schools already do this by kicking out people with low enough GPAs, academic probation, etc. My undergrad would expel you if you had a GPA below 1.8 after your first semester, and you were on academic probation if your GPA was > 1.8 and <=2.5.
This assumes, of course, an institution is actively trying to raise the academic bar of its student population. Most schools are emphatically not trying to do this and are focused more on just increasing enrollment, getting more tax dollars, and hiring more administrators.
Many mathematics professors don't require homework to be turned in for grading. For example, the calculus courses at many US universities. Grades are solely determined by quizzes in the discussion section and by exams. Failure rates are above 30%, but that's accepted.
This model won't work for subjects that rely on students writing reports. But yes, universities frequently accept that failure rates for some courses will be high, especially for engineering and the sciences.
When I was a student, I spent my first 2 years in a so-called prépa intégré of a French engineering school. 20% of students failed and were shown the door during those two years (some failed, some figured that it just wasn't for them). That's fine, that means you keep the ones who actually do the work.
At a certain point, you have to start treating students like adult, either they succeed or they don't but it's their personal responsibility.
My favorite math professor said "your homework is as many of the odd-numbered problems as you feel like you need to do to understand the material" and set a five minute quiz at the start of each lecture which counted as the homework grade. I can't speak for the other students, but I did more homework in his classes than any of the other math classes I took.
That's how it is in Italy. And that's why Italy is behind every other country in education. Because it hasn't yet made graduating as easy as it is in other places.
Well graduation rate is a pretty terrible way to grade education, especially country to country. You could have 100% graduation rate today by just passing everyone - that's basically what we have in primary education and there was an article here just last week about how most college students are functionally illiterate.
In sweden until high school it's literally impossible to fail. There's no grades and no way of failing anyone.
Then they suddenly become kinda stricter in high school, where your results decide if you can go to university and which.
But I've been to one of the top technical universities and compared to Italy it was very easy. It was obvious the goal was to try and have everyone pass. Still people managed to fail or drop out anyway, although not in the dramatic numbers I saw in Italy for math exams.
I wonder to what extent this is students who would have stuck it out now taking the easy way and to what extent it’s students who would have just failed now trying to stick it out.
Which part is encouraging? We rely on the extra ordinary (talent and/or sheer drive) to make leaps of progress - what happens if they are handicapped? If the dumbest fake it and make it to the positions they shouldn't be entrusted with, what prevents the catastrophes?
>We’re either handicapping our brightest, or boosting our dumbest.
Honestly it seems like we're doing both most of the time. It's hard to only optimize resources for boosting the dumbest without taking them away from the brightest.
The brightest will evaluate the tradeoffs properly or will have education that will give them proper evaluations of AI. Maybe some bright people will be handicapped, but it won't be the bright'est'. That handicap on the bright could also lead to new forms of talent and multi-faceted growth.
What percentage of the dumbest will be boosted? What makes a person dumb? If they are productive and friendly, isn't that more important?
What percentage of the dumbest will fall farther or abandon heavy learning even earlier?
my partner teaches high school math and regularly gets answers with calculus symbols (none of the students have taken any calculus). these students aren't putting a single iota of thought into the answers they're getting back from these tools.
To me this is the bigger problem. Using LLMs is going to happen and there's nothing anyone can do to stop it. So it's important to make people understand how to use them, and to find ways to test that students still understand the underlying concepts.
I'm in a 100%-online grad school but they proctor major exams through local testing centers, and every class is at least 50% based on one or more major exams. It's a good way to let people use LLMs, because they're available, and trying to stop it is a fool's errand, while requiring people to understand the underlying concepts in order to pass.
You can always give extra points for homework which then compensate from lacking in tests. If you get perfect points in test, well maximum grade. If less than perfect, you can up grade with those extra points. Fair for everyone.
>The solution is making all homework optional and having an old-school end of semester exam.
Not really. While doing something to ensure that students are actually learning is important, plenty of the smartest people still don't always test well. End of semester exams also tend to not be the best way to tell if people are learning along the way and then fall off part way through for whatever reason.
When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.
For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interest in penmanship, spelling, or cartography.
I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
Like, Socrates may have been against writing because he thought it made your memory weak, but at least I, an individual, am perfectly capable of manufacturing my own writing implements with a modest amount of manual labor and abundantly-available resources (carving into wood, burning wood into charcoal to write on stone, etc.). But I ain't perfectly capable of doing the same to manufacture an integrated circuit, let alone a digital calculator, let alone a GPU, let alone an LLM. Anyone who delegates their thought to a corporation is permanently hitching their fundamental ability to think to this wagon.
> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
Yes, but that horse has long ago left the barn.
I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
The history of civilization is the history of specialization. No one can re-build all the tools they rely on from scratch. We either let other people specialize, or we let machines specialize. LLMs are one more step in the latter.
The Luddites were right: the machinery in cotton mills was a direct threat to their livelihood, just as LLMs are now to us. But society marches on, textile work has been largely outsourced to machines, and the descendants of the Luddites are doctors and lawyers (and coders). 50 years from new the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."
This reply brings to mind the well-known Heinlein quote:
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
I've had people do this to me (albeit in an attempt to be helpful, not snarky) and it felt so weird. The answers are something a copywriter would have thrown together in an hour. Generic, unhelpful drivel.
That's a quote that sounds great until, say, that self-built building by somebody who's neither engineer nor architect at best turns out to have some intractible design flaw and at worst collapses and kills people.
It's also a quote from a character who's literally immortal and so has all the time in the world to learn things, which really undermines the premise.
I would like to replay with another quote by another immortal(or long lived) character, Professor „Reg“ Chronotis from Douglas Adams:
"That I lived so much longer, just means, that I forgot much more, not that I know much more."
Memory might have a limited capacity, but of course, I doubt most humans use that capacity, or well, for useful things. I know I have plenty of useless knowledge ..
I sort of view that list as table stakes for a well rounded capable person.. Well barring the invasion bit. Then again, being familiar with guns and or other forms of self defense is valuable.
I think most farmers would be somewhat capable on most of that list. Equations for farm production. Programming tractor equipment. Setting bones. Giving and taking orders. Building houses and barns.
Building a single story building isn’t that difficult, but time consuming. Especially nowadays with YouTube videos and pre-planned plans.
I'm not saying that our ancestors were wrong. Hell, I live in a house that was originally built under similar conditions.
That being said, buildings collapse a lot less frequently these days. House fires happen at a lower rate. Insulation was either nonexistent or present in much lower quantities.
I guess the point I'm making is that the lesson here shouldn't be "we used to make our houses, why don't we go back to that?" It also shouldn't be "we should leave every task to a specialist."
Know how to maintain and fix the things around your house that are broken. You don't need a plumber to replace the flush valve on your toilet. But maybe don't try to replace a load-bearing joist in your house unless you know what you're doing? The people building their own homes weren't engineers, but they had a lot more carpentry experience than (I assume) you and I.
>If a house builder built a house for a man but did not secure/fortify his work so that the house he built collapsed and caused the death of the house owner, that house builder will be executed.
If even professionals did get it wrong so often that there had to be law for it... Yeah, maybe it is not that simple.
In a village most houses were built by their owners. I am not talking here about nicely decorated brick buildings in a city: they were obviously designed and built by professionals.
> That is exactly how our ancestors built houses. Also a traditional wooden house doesn't look complicated.
The only homes built by our ancestors that you see are those that didn't collapsed and killed whoever was inside, burned down, were too unstable to live in, were too much of a burden to maintain and keep around, etc.
That’s…not what I asked. Y’all need to recognize that Darwinism was intended as an explanatory theory, not as an ethos. And it’s not how we judge building practices.
Honestly having gone through the self build process for a house it’s not that hard if you keep it simple. Habitat for humanity has some good learning material
All of these examples are done by specialist because I don't see many cars being build by dentists.
Even in mankind's beginning specialization existed in the form of hunter and gatherer. This specialization in combination with team work brought us to the top of the food chain to a point where we can strive beyond basic survival.
The people making space crafts (designing and building, another example of specialization) don't need to know how to repair or build a microwave to heat there for food.
Of course everybody still needs to know basic knowledge (how to turn on microwave) to get by.
> All of these examples are done by specialist because I don't see many cars being build by dentists.
I'm not sure how you get from pre-agricultural humans developing fire, to dentists building cars.
I don't doubt that after fire was 'understood', there was specialisation to some degree, probably, around management of fire, what burns well, how best to cook, etc.
But any claim that fire was the result of specialisation seems a bit hard to substantiate. A committee was established to direct Thag Simmons to develop a way to .. something involving wood?
Wheel, the setting of broken bones, language etc - specialisation happened subsequently, but not as a prerequisite for those advances.
> Even in mankind's beginning specialization existed in the form of hunter and gatherer. This specialization in combination with team work brought us to the top of the food chain to a point where we can strive beyond basic survival.
Totally agree that we advanced because of two key capabilities - a) persistence
hunting, b) team / communication.
You seem to be conflating the result of those advancements with "all progress", as was GP.
> The people making space crafts (designing and building, another example of specialization) don't need to know how to repair or build a microwave to heat there for food.
I am not, was not, arguing that highly specialised skills in modern society are not ... highly specialised.
I was arguing against the lofty claim that:
"All progress we've made is due to ever increasing specialization."
Noting the poster of that was responding to a quote from a work of fiction - claiming it was awful - that the author had suggested everyone should be familiar with (among other things) 'changing a diaper, comfort the dying, cooperate, cook a tasty meal, analyse a problem, solve equations' etc.
If you're suggesting that you think some people in society should be exempt from some basic skills like those - that's an interesting position I'd like to see you defend.
> Of course everybody still needs to know basic knowledge (how to turn on microwave) to get by.
The discovery of fire itself was not progress, but how to use it very much is. They most likely didn't have a "discover fire" specialization in the modern sense but I doubt the one first to create a fire starter was afterwards deligated to only collect berries. The discovery and creation of something obviously often comes before the specialization or it would otherwise be impossible to discover and create anything.
>FWIW I don't have a microwave oven.
That was just an example. You still know how to use them hence basic knowledge. Seem like this discussion boils down to semantics
I dispute your foundational claim that discovery of things != progress.
I concur that semantics have a) overtaken this thread, and b) are part of my complaint with OP when they claimed all historical progress was the result of specialisation.
A lot of discoveries come from someone applying their scientific knowledge to a curious thing happening in their hobby or private life. A lot of successful businesses apply software engineering to a specific business problem that is invisible to all other engineers.
I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.
IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.
> IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.
The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.
No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
> The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.
And I can tell you from experience that "letting a calculator solve arithmetic" (or more accurately, being dependent on a calculator to solve arithmetic) means you cripple your ability to learn and understand more advanced stuff. At best your decision turned you into the equivalent of a computer trying to run a 1GB binary with 8MB of RAM and a lot of paging.
> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
It's the latter. Though I suspect the masses will be shoved into the garbage disposal than be allowed to wallow in ignorant consumerism. Only the elite that owns the means of production will be allowed to indulge.
There are opposing trends in this. First, that like many tools the capable individual can be made much more effective (eg 2x->10x), which simply replaces some workers, and last occurred during the great depression. Second, that the tools become commoditized to the point where they are readily available from many suppliers at reasonable costs, which happened with calculators, word processors, and office automation. This along with a growing population, global trade, and rising demand led to the 80s-2k boom.
If the product is not commoditized, then capital will absorb all the increased labor efficiency, while labor (and consumption) are sacrificed on the altar of profits.
I suspect your assumption is more likely. Voltaire's critique of 'the best of all possible worlds' and man's place in creating meaning and happiness, provides more than one option.
I know how to do arithmetic, but I still use my PC or a calculator because I am not entirely sure that I am accurate. I use "units" as well extensively, it can be used for much more than just unit conversion. You can do complex calculations with it.
You can solve stuff like:
> If you walk 1 mile in 7 minutes, how fast are you walking in kilometers per hour?
$ units -t "1 mile / 7 minutes" "kilometers per hour"
13.7943771428571
You need some basic knowledge to even come up with "1 mile / 7 minutes" and "kilometers per hour".
There are examples where you need much more advanced knowledge, too, meaning it is not enough to just have a calculator, for example in thermodynamics, when dealing with gas laws, you cannot simply convert pressure, volume, and temperature from one unit to another without taking into account the specific context of the law you’re applying (e.g., the ideal gas law or real gas behavior)", or for example you want to convert 1 kilowatt-hour (kWh) to watts (W). This is a case of energy (in kilowatt-hours) over time (in hours), and we need to convert it to power (in watts), which is energy per unit time.
You cannot do:
$ units -t "1 kWh" "W"
conformability error
3600000 kg m^2 / s^2
1 kg m^2 / s^3
You have to have some knowledge, so you could do:
$ units -t "1 kWh" "J"
1 kWh = 3600000 J
$ units -t "3600000 joules / 3600 seconds" "W"
3600000 joules / 3600 seconds = 1000 W
To sum it up: in many cases, without the right knowledge, even the most accurate tool will only get you part of the way there.
It applies to LLMs and programming, too, thus, I am not worried. We will still have copy-paste "programmers", and actually knowledgeable ones, as we have always had. The difference is that you can use LLMs to learn, quite a lot, but you cannot use a calculator alone to learn how to convert 1 kWh to W.
>the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E
Seeing how the world is based around consumerism, this future seems more likely.
HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.
I think it's possible. I think the greatest trick our current societal structure ever managed to pull, is the proliferation of the belief that any alternatives are impossible. "Capitalist realism"
People who organize tend to be the people who are most optimistic about change. This is for a reason.
It may be possible for you (I am assuming you are > 20, mature adult). But the context is around teens in the prime of their learning. It is too hard to keep ChatGPT/Claude away from them. Social media is too addictive. Those TikTok/Reels/Shorts are addictive and never ending. We are doomed imho.
If education (schools) were to adopt a teaching-AI (one that will given them the solution, but at least ask a bunch of questions ), may be there is some hope.
I encourage you to take action to prove to yourself that real change is possible.
What you can do in your own life to enact change is hard to say, given I know nothing about your situation. But say you are a parent, you have control over how often your children use their phones, whether they are on social media, whether they are using ChatGPT to get around doing their homework. How we raise the next generation of children will play an important role in how prepared they are to deal with the consequences of the actions we're currently making.
As a worker you can try to organize to form a union. At the very least you can join an organization like the Democratic Socialists of America. Your ability to organize is your greatest strength.
So your plan is to encourage people to "get off the Internet" by posting on the Internet, and to stave off automation by encouraging workers to gang up on their employers and make themselves a collective critical point of failure.
Well, you know, we'd all love to change the world...
>Well, you know, we'd all love to change the world
The social contract lives and dies by what the populace is willing to accept. If you push people into a corner by threatening their quality of life, don't be surprised if they push back.
> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
I believe there is some truth to it. When you automated away some time-consuming tasks, your time and focus is shifted elsewhere. For example, washing clothes is no longer a major concern since the massification of washing machines. Software engineering also progressively shifted it's attention to higher-level concerns, and went from a point where writing/editing opcodes was the norm to a point where you can design and deploy a globally-available distributed system faster than what it takes to build a program.
Focusing on the positive traits of AI, having a way to follow the Socratic method with a tireless sparring partner that has an encyclopedic knowledge on everything and anything is truly brilliant. The bulk of the people in this thread should be disproportionally inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference.
> The bulk of the people in this thread should be disproportionally inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference
I agree that AI could be an enormous educational aid to those who want to learn. The problem is that if any human task can be performed by a computer, there is very little incentive to learn anything. I imagine that a minority of people will learn stuff as a hobby, much in the way that people today write poetry or develop film for fun; but without an economic incentive to learn a skill or trade, having a personal Socratic teacher will be a benefit lost on the majority of people.
> the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either
This is the existential crisis that appears imminent. What does it mean if humanity, at large, begins to offload thinking (hence decision making), to machines?
Up until now we’ve had tools. We’ve never before been able to say “what’s the right way to do X?”. Offloading reasoning to machines is a terrifying concept.
> I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.
There's also the problem of developing critical thinking skills. It's not very comforting to think of a time where your average Joe relies on an AI service to tell what he should think and believe, when that AI service is ran, trained, and managed by people pushing radical ideologies.
I think the latest GenAI/LLM bubble shows that tech (this hype kind of tech) doesn't want us to learn, to think or reason. It doesn't want to be seen as a mere tool anymore, it wants to drive under the appearance that it can reason on its own. We're in the process where tech just wants us to adapt to it.
”I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site.”
Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.
Sure, you might be able to get a job slinging AI code to produce CRUD apps or whatever. But since that’s the easy thing, you’re going to have a hard time standing out from the pack. Yet we will still need people who understand the concepts at a deeper level, to fix the AI messes or to build the complex systems AI can’t, or the systems that are too critical to rely on AI, or the ones that are too regulated. Being able to do those thing, or to just better understand what the AI is doing to get better more effective results, that will be more valuable than just blindly leaning on AI, and it will remain valuable for a while yet.
Maybe some day the AI can do everything, including ASICs and growing crops, but it’s got a long way to go still.
> Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.
I think you're missing the point of my comment. I'm not saying that human knowledge is useless. I'm specifically arguing against the case that:
> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
My logic being that we are already irreversibly dependent on supply chains.
You’re absolutely right. But my point still stands, too, which is that despite being irreversibly dependent on supply chains, doesn’t mean we are redundant. We still need people at all levels of the supply chain.
Maybe it’s fewer people, yes, but it’ll take quite a leap forward in AI ability to replace all the specialists we will continue to require, especially as the less-able AI makes messes that need to be cleaned up.
I don’t think specialization is a bad thing but the friends I know that only know their subject seem to… how do I put this… struggle at life and everything a lot more.
And even at work, the coworkers that don’t have a lot of general knowledge seem to work a lot harder and get less done because it takes them so much longer to figure things out.
So I don’t know… is avoiding the work of learning worth it to struggle at life more?
I dunno, the "tool" that LLMs "replace" is thinking itself. That seems qualitatively different than anything that has come before. It's the "tool" that underlies all the others.
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS.
Why not? I mean that, quite literally.
I don't know how to make an ASIC, and if I tried to write an OS I'd probably fail miserably many times along the way but might be able to muddle through to something very basic. The rest of that list is certainly within my wheelhouse even though I've never done any of those things professionally.
The peer commenter shared the Heinlein quote, but there's really something to be said for /society/ of being peopled by well-rounded individuals that are able to competently turn themselves to many types of tasks. Specialization can also be valuable, but specialization in your career should not prevent you from gaining a breadth of skills outside of the workplace.
I don't know how to do any of the things in your list (including building a web site) as an /expert/, but it should not be out of the realm of possibility or even expectation that people should learn these things at the level of a competent amateur. I have grown a garden, I have worked on a farm for a brief time, I've helped build houses (Habitat for Humanity), I've taken a hobbyist welding class and made some garish metal sculptures, I've built a race car and raced it, and I've never built a toaster but I have repaired one (they're actually very electrically and mechanically simple devices). Besides the disposable income to build a race car, nothing on that list stands out to me as unachievable by anyone who chooses to do so.
> The peer commenter shared the Heinlein quote, but there's really something to be said for /society/ of being peopled by well-rounded individuals that are able to competently turn themselves to many types of tasks
Being a well-rounded individualist is a great, but that's an orthogonal issue to the question of outsourcing our skills to machinery. When you were growing crops, did you till the land by hand or did you use a tractor? When you were making clothes did you sew by hand or use a sewing machine? Who made your sewing needles?
The (dubious) argument for AI is that using LLMs to write code is the same as using modern construction equipment to build a house: you get the same result for less effort.
ok - but.. here in California, look at houses that are 100 years old, then look at the new ones.. sure you can list the improvements in the new ones, on a piece of paper.. the craftsmanship, originality and other intangibles are obviously gone in the modern versions.. not a little bit gone, a lot gone. Let the reader use this example as a warmup to this new tech question.
>50 years from new the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."
And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI? Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?
We live in a highly specialized society that requires people take out large loans to learn the skills necessary for their careers. You take away their ability to provide their labor, and it now seriously threatens millions of workers from obtaining the same quality of life they once had.
I seriously oppose such a future, and if that makes me a Luddite, so be it.
It took me a long time to master the pen tool in Photoshop. I don't mean that I spent a weekend and learned how it worked. I meant that out of all the graphic designers at the agency I was working for, I was the designer who had the most flawless pen-tool skills and thus was the envy of many. It is now an obsolete skill. You can segment anything instantly and the results are pristine. Thanks to technology one no longer needs to learn how to make the most form-fitting curves with the pen tool to be labeled a great graphic designer.
It's remarkable that reading and writing, once the guarded domain of elites and religious scribes, are now everyday skills for millions. Where once a handful of monks preserved knowledge with their specialized scribing skills, today anyone can record history, share ideas, and access the thoughts of centuries with a few keystrokes.
The wheel moves on and people adapt. Who knows what the "right side" of history will be, but I doubt we get there by suppressing advancements and guaranteeing job placements simply because you took out large loans to earn a degree and a license.
But what if the rate at which things change increases to the point that humans can't adapt in time? This has happened to other animals (coral has existed for millions of years and is now threatened by ocean acidification, any number of native species have been crowded out by the introduction of non-native ones, etc.).
Even humans have gotten shocks like this. Things like the Black Death created social and economic upheavals that lasted generations.
Now, these are all biological examples. They don't map cleanly to technogical advances, because human brains adapt much faster than immune systems that are constrained by their DNA. But the point is that complex systems can adapt and can seem to handle "anything," up until they can't.
I don't know enough about AI or LLM's to say if we're reaching an inflection point. But most major crises happen when enough people say that something can't happen, and then it happens. I also don't think that discouraging innovation is the solution. But I don't also want to pretend like "humans always adapt" is a rule and not a 300,000 year old blip on the timeline of life's existence.
Automating drudgery is a good thing. It frees us up to do more important things.
Automating thinking and self-expression is a lot more dangerous. We're not automating the calculation or the research, but the part where you add your soul to that information.
How is a pen tool in Photoshop equivalent to an AI that can perform your entire job at a lower cost remotely similar? There are levels to this, and I don't think the same old platitudes apply.
> And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI?
Some will manage to remain in their field, most won't.
> Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?
They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories [1].
> I seriously oppose such a future, and if that makes me a Luddite, so be it.
Like I said, the Luddites were right, in the short term. In the long term, we don't know. Maybe we'll live in a post-scarcity Star Trek world where human labor has been completely devalued, or maybe we'll revert to a feudal society of property owners and indentured servants.
>They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories
>or maybe we'll revert to a feudal society of property owners and indentured servants.
We as the workers in society have the power to see that this doesn't happen. We just need to organize. Unionize. Boycott. Organize with people in your community to spread worker solidarity.
I think more and more workers are warming up to unions. As wages in software continue to be oppressed, I think we'll see an increase in unionization efforts for software engineers.
"Gee, it seems that people in my profession are in danger of being replaced by AI. I wonder if there's anything I can do to help speed up that process..."
If that were indeed the case, your employer might not be investing so much in automation. They don't want to give up bargaining power any more than you do.
Hmm, millions of humans are spending a bulk of their lives plugging away at numbers on a screen. We can replace this with an AI and free them up to do literally anything else.
No, let's not do that. Keep being slow ineffective calculators and lay on your death bed feeling FULFILLED!
You're skipping over a critical step, which is that our society allocated resources based on the labor value that an individual provides. If we free up everyone to do anything, they're not providing labor any more, so they get no resources. In other words, society needs to change in big ways, and I don't know how or if it will do that.
Where is the existing work these people would take up? If it doesn't exist yet, then how do you suppose people will support themselves in the meantime?
What if the new work that is created pays less? Do you think people should just accept being made obsolete to take up lower paying jobs?
>Where is the existing work these people would take up? If it doesn't exist yet, then how do you suppose people will support themselves in the meantime?
Everywhere in human society. "Jobs" is literally when you do something that someone needs, so that in exchange they do something that you need. And in human society, because of AI, neither people’s needs, nor the ability to satisfy them, nor the possibility of exchanging them will suddenly disappear. So the jobs will be everywhere.
>Do you think people should just accept being made obsolete to take up lower paying jobs?
Let's start with the fact that on average all jobs will become higher paying because the amount of goods produced (and distributed) will increase. So the more correct answer to this question is "What choice will they have?".
AI will make the masses richer, so society will not abandon it. Subsidize their obsolete well-paid jobs to make society poorer? Why would anyone do that? So the people replaced by AI will go to work in other jobs. Sometimes higher paying, sometimes lower.
If we are talking about real solutions, the best alternative they will have is to form a cult like the Amish did (God bless America and capitalism), in which they can pretend that AI does not exist and live as before. The only question in this case is whether they will find willing participants, because for most, participation in such a cult will mean losing the increase in income provided by the introduction of AI.
No, that's just logic. AI doesn't thwart the ability of people to satisfy their needs (getting richer).
>Inequality is worse now than it was 20 years ago despite technology progressing.
And people are still richer than ever before (if we take into account the policies that are thwarting society's ability to satisfy each other's needs and that have nothing to do with technologies)
Huh? If AI can do any job cheaper and better than a person can, why would anyone hire a person? What "useful" skill could a person exchange for resources in an era when computers write code, drive cars, fight wars, and cook food?
But you answer your own question: the only situation in which it makes no sense to hire another person to satisfy a need is when that need has already been satisfied in another way.
And if all needs are already satisfied... Why worry about work? The purpose of work is to satisfy needs. If needs are satisfied, there is no need for work.
You assume the everyone's needs are solved together. More likely is that the property owning class acquire AI robots to provide cheap labor; and everyone else doesn't.
>You assume the everyone's needs are solved together.
No, I am not assuming that. "Together" are not required. It's just combination of needs, ability to satisfy them and ability to exchange - creates jobs. And nothing of this will be thwarted by AI.
>More likely is that the property owning class acquire AI robots to provide cheap labor
Doesn't matter. Your everyday person either will be able to afford this cheap AI labor for themselves (no problem that required solving) or if AI labor for them are unaffordable - will create jobs for other people (there will be jobs on market everywhere)
>We can replace this with an AI and free them up to do literally anything else.
I would happily support automation to free myself, and others, from having to work full-time. But we live in a capitalist society, not StarTrek. Automation doesn't free up people from having to work, it only places people in financial crisis.
Specialization is over-rated. I've done everything in your list except make an ASIC because learning how to do those things was interesting and I prefer when things are done my way.
I started way back in my 20s just figuring out how to write websites. I'm not sure where the camel's back would have broken.
It has, of course, been convenient to be able to "bootstrap" my self-reliance in these and other fields by consuming goods produced by others, but there is no mechanical reason that said goods should be provided by specialists rather than advanced generalists beyond our irrational social need for maximum acceleration.
Jack of all trades, master of none. I also somehow doubt that you've built a car from scratch, including designing the engine, carving it out of a block of metal and so on. And if we're talking modern car, good luck fabbing the integrated circuits in your backyard or whatever. Even your particular generalist fantasy will (and most likely has) hit the hard constraints of specialization real quick.
There is no single human alive that can understand or build a modern computer from top to bottom. And this is true for various bits of human technology, that's how specialized we are as a species.
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
Reminds me of the Nate Bargatz set where he talks about how if he was a time traveler to the past that he wouldn't be able to prove it to anyone. The skills most of us have require this supply chain and then we apply it at the very end. I'm not sure anyone in 1920 cares about my binary analysis skills.
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
All the things you mention have a certain objective quality that can be reduced to an approachable minimum. A house could be a simple cabin, a tent, a cave; a piece of cloth could just be a cape; metal can be screwed, glued or cast; a transistor could be a relay or a wooden mechanism etc. ...history tells us all that.
I think when there's a Homo ludens that wants to play, or when there's a Homo economicus that wants us to optimize, there might be one that separates the process of learning from adaptation (Homo investigans?)[0]. The process of learning something new could be such a subjective property that keeps a yet unknown natural threshold which can't be lowered (or "reduced") any further. If I were to be overly pessimistic, a hardcore luddite, I'd say that this species is under attack, and there will be a generation that lacks this aspect, but also won't miss it, because this character could have never been experienced in the first place.
Speak for yourself. Some of us see the difficulty in sustaining and maintaining this fragile technology stack and have decided to do something about it. I may not be able to do all those things but it is worth learning, since there really is no downside for someone who enjoys learning. I am tackling farming and cpu design at the moment and it is tremendously fun.
Good for you, I guess, but your hobbyist interest in farming is not an argument against using AI. The point of my comment is that the our technology stack is already large enough that adding one more layer is not going to make a difference.
Things like this give us enshitification. When the consumer has no understanding of what they're buying, they have to take corporations at their word that new "features" are actually beneficial, when they're mostly beneficial to the seller.
Kind of like how an ignorant electorate makes for a poor democracy, an ignorant consumer base makes for a poor free market.
Why do people keep parroting this reduction of Socrates' thoughts... I don't think it was just as simple as he thought writing was bad. And we already know that writing isn't everything, anyone who as done any study of a craft can tell you that reading and writing don't teach you the feel of the art form, but can also nonetheless aid in the study. It's not black and white, even though people like to make it out to be.
SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.
PHAEDRUS: You are absolutely right about that, too.
SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?
PHAEDRUS: Which one is that? How do you think it comes about?
SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent.
I think it makes a very relevant point to us as well. The value of doing the work yourself is in internalizing and developing one's own cognition. The argument of offloading to the LLM to me sounds link arguing one should bring a forklift to the gym
Yes, it would be much less tiresome and you'd be able to lift orders of magnitude more weights. But is the goal of the gym to more efficiently lift as much weight as possible, or to tire oneself and thus develop muscles?
I don't know, most of the things I'm reliant on, from my phone, ISP, automobile, etc are built on fragile interdependent supply chains provided by for-profit companies. If you're really worried about this, you should learn survival skills not the academic topics I'm talking about.
So if you're not bothering to learn how to farm, dress some wild game, etc, chances are this argument won't be convincing for "why should I learn calculus"
For what it's worth, locally runnable language models are becoming exceptionally capable these days, so if you assume you will have some computer to do computing, it seems reasonable to assume that it will enable you to do some language model based things. I have a server with a single GPU running language models that easily blow GPT 3.5 out of the water. At that point, I am offloading reasoning tasks to my computer in the same way that I offload memory take to my computer through my note taking habits.
All adults were once children and there are plenty of adults who cannot read beyond a middle school reading level or balance a simple equation. This has been a problem before we ever gave them GPTs. It stands to reason it will only worsen in a future dominated by them.
“You won’t always have a calculator” became moderately false to laughably false as I went from middle to high school. Every task I will ever do for money will be done on a computer.
I’m still garbage at arithmetic, especially mental math, and it really hasn’t inhibited my career in any way.
But I bet you'd know if some calculated number was way too far-off.
I'm no Turing or Ramanujan, but my opinion is that knowing how the operations work, and as example understanding how the area under a curve is calculated, allows you to guesstimate whether numbers are close enough in terms of magnitude to what you are calculating, without needing to be exact in figures.
It is shocking how often I have looked at a spreadsheet, eyeballed the number of rows and the approximate average of numbers in there and figured out there's a problem with a =choose-your-forumula(...) getting the range wrong.
It's pretty annoying in customer service when someone handing you back change has difficulty doing the math. There's been many times doing simple arithmetic in my head has been helpful, including times when my hands were occupied.
I don’t know where you live, but I haven’t used nor carried cash on me for at least 5 years now. Everything either takes card or just tap using your phone/watch. Everything. Parking meters, cashiers, online shopping, filling up your car. I live in a “third world” country too.
Use it or lose it. With the invention of the calculator, students lost the ability to do arithmetic. Now, with LLMs, they lose the ability to think.
This is not conjecture by the way. As a TA, I have observed that half of the undergraduate students lost the ability to write any code at all without the assistance of LLMs. Almost all use ChatGPT for most exercises.
Thankfully, cheating technology is advancing at a similarly rapid pace. Glasses with integrated cameras, WiFi and heads-up display, smartwatches with polarized displays that are only readable with corresponding glasses, and invisibly small wireless ear-canal earpieces to name just a few pieces of tech that we could have only dreamed about back then. In the end, the students stay dumb, but the graduation rate barely suffers.
"Technology can do X more conveniently than people, so why should children practice X?" has been a point of controversy in education at least since pocket calculators became available.
I try to explain by shifting the focus from neurological to musculoskeletal development. It's easy to see that physical activity promotes development of children's bodies. So although machines can aid in many physical tasks, nobody is suggesting we introduce robots to augment PE classes. People need to recognize that complex tasks also induce brain development. This is hard to demonstrate but has been measured in extensive tasks like learning languages and music performance. Of course, this argument is about child development, and much of the discussion here is around adult education, which has some different considerations.
My last calculator had a "solve" button and we could bring it in an exam.
You still needed to know what to ask it, and how to interpret the output. This is hard to do without an understanding of how the underlying math works.
The same is true with LLMs. Without the fundamentals, you are outsourcing work that you can't understand and getting an output that you can't verify.
I would add that we don't pretend PE or gyms serve any higher purpose besides individual health and well-being, which is why they are much more game-ified than formal education. If we acknowledge that it doesn't particularly matter how a mind is being used, the structure of school would change fundamentally.
The problem with GPS is that you never learn to orient yourself. You don't learn to have a sense of place, direction or elapsed distance. [0]
As to writing, just the action of writing something down with a pen, on paper, has been proven to be better for memorization than recording it on a computer [1].
If we're not teaching these basic skills because an LLM does it better, how do learn to be skeptical of the output of the LLM. How do we validate it?
How do we bolster ourselves against corporate influences when asking which of 2 products is healthier? How do we spot native advertising? [2]
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc.
I'm the polar opposite. And I'm AI researcher.
The reason you can't answer your kid when he asks about LLMs is because the original position was wrong.
Being able to write isn't optional. It's a critical tool for thought. Spelling is very important because you need to avoid confusion. If you can't spell no spell checker can save you when it inserts the wrong word. And this only gets far worse the more technical the language is. And maps are crucial too. Sometimes, the best way to communicate is to draw a map. In many domains like aviation maps are everything, you literally cannot progress without them.
LLMs are no different. They can do a little bit of thinking for us and help us along the way. But we need to understand what's going on to ask the right questions and to understand their answers.
The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The AI becomes their brain, such that they cannot function without it.
I'd never want to work with someone who is this reliant on technology.
Maybe 40 years ago there were programmers that would not work with anyone that use IDEs or automated memory management. When presented with a programming task that requires these things and you're WITHOUT your IDE or whatever, they will fall apart.
Look, I agree with you, I'm just trying to articulate to someone why they should learn X if they believe an LLM could help them and "an LLM won't always be around" isn't a good argument, because lets be honest, it likely will. This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math"
> This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math"
People who can't do simple addition and multiplication without a calculator (12*30 or 23 + 49) are absolutely at a disadvantage in many circumstances in real life and I don't see how you could think this isn't true. You can't work as a cashier without this skill. You can't play board games. You can't calculate tips or figure out how much you're about to spend at the grocery store. You could pull out your phone and use a calculator in all these situations, but people don't.
A lot of developers of my generation (30+) learned to program within a code editor and compile their project in command line. Remove the IDE and we can still code.
On the other hand my master 2 students, most of which learned scripting in the previous year, can't even split a project in multiple files after being explained multiple times. Some have more knowledge and ability than others, but a signifiant fraction is just copy-pasting LLM output to solve whatever is asked from them instead of trying to do it themselves, or asking questions.
I think the risk isn't just that LLMs won't exist, but that they will fail at certain tasks that need to get done. Someone who is highly dependent on prompt engineering and doesn't understand any of the underlying concepts is going to have a bad time with problems they can't prompt their way out of.
This is something I see with other tools. Some people get highly dependent on things like advanced IDE features and don't care to learn how they actually work. That works fine most of the time but if they hit a subtle edge case they are dead in the water until someone else bails them out. In a complicated domain there are always edge cases out there waiting to throw a wrench in things.
Knowledge itself is the least concern here. Human society is extremely good at transmitting information. More difficult to transmit are things like critical thinking and problem-solving ability. Developing meta-cognitive processes like the latter are the real utility of education.
Indeed. More people need to grow their own vegetables. AI may undermine our ability for high level abstract thought, but industrial agriculture already represents an existential threat, should it be interrupted for any reason.
My point is that the necessary skill set required by society is ever-changing. Skills like handwriting, spelling, and reading a map are fading from importance.
I could see a future where pioneering might be useful again.
Do you work with people who can multiply 12.3% * 144,005.23 rapidly without a calculator?
> The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The parent poster is positing that for 90% of cases they WILL have their AI assistant because its in their pocket, just like a calculator. It's not insane to think that and its a fair point to ponder.
When in human history has a reasonably educated person been able to do that calculation rapidly without a calculator (or tool to aid them)? I think it's reasonable to draw a distinction between "basic arithmetic" and "calculations of arbitrary difficulty". I can do the first and not the second, and I think that's still been useful for me.
I do agree that it's a fair point to ponder. It does seem like people draw fairly arbitrary lines in the sand around what skills are "essential" or not. Though I can't even entertain the notion that I shouldn't be concerned about my child's ability to spell.
Seems to me that these gains in technology have always come at a cost, and so far the cost has been worth it for the most part. I don't think it's obviously true that LLMs will be (or won't be) "worth it" in the same way. And anyways the tech is not nearly mature enough yet for me to be comfortable relying on it long term.
Perhaps that mode of thinking is wrong, even if it is accepted.
Take rote memorization. It is hard. It sucks in so many ways (just because you memorized something doesn't mean you can reason using that information). Yet memorization also provides the foundations for growth. At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for? How can you assess the validity of a source if you don't know the fundamentals? How can you avoid falling prey to propaganda if your only knowledge of a subject is what is in front of your face? None of that is to say that we should dismiss search and depend upon memorization. We need both.
I can't tell you what to say to your children about LLMs. For one thing, I don't know what is important to them. Yet it is important to remember that it isn't an either-or thing. LLMs are probably going to be essential to manage the profoundly unmanagable amount of information our world creates. Yet it is also important to remember that they are like the person who memorizes but lacks the ability to reason. They may be able to impress people with their fountain of facts, yet they will be unable to create a mark on the world since they will lack the ability to create anything unique.
> At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for?
That's actually pretty doable. Almost every resource provides more context than just the exact thing you're asking. You build on that knowledge and continue asking. Nobody knows everything - we've been doing the equivalent of this kind of research forever.
> How can you assess the validity of a source if you don't know the fundamentals?
Learn about the fundamentals until you get to the level you're already familiar with. You're describing an adult outside of school environment learning basically anything.
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
And those people are wrong, in a similar way to how it's wrong to say: "There's no point in having very much RAM, as you can just page to disk."
It's the cognitive equivalent of becoming morbidly obese (another popular decision in today's world).
I think the biggest issue with LLMs is basically just the fact that we're finally coming to the end of the long tail of human intellectual capability.
With previous technological advancements, humans had places to intellectually "flee", and in fact, previous advancements were often made for the express purpose of freeing up time for higher level pursuits. The invention of computers, for example, let mathematicians focus on much higher level skills (although even there an argument can be made that something has been lost with the general decrease in arithmetic abilities amoung modern mathematicians).
Large language models don't move humans further up the value chain, though. They kick us off of it.
I hear lots of people prosletizing wonderful futures where humans get to focus on "the problems that really matters", like social structures, or business objectives; but there's no fundamental reason that large language models can't replace those functionalities aswell. Unlike, say, a Casio, which would never be able to replace a social worker no matter how hard you tried.
Why should you learn how to add when you can just use a calculator? We've had calculators for decades!
Because understanding how addition works is instrumental to understanding more advanced math concepts. And being able to perform simple addition quickly, without a calculator is a huge productivity boost for many tasks.
In the world of education and intellectual development it's not about getting the right answer as quickly as possible. It's about mastering simple things so that you can understand complicated things. And often times mastering a simple thing requires you to manually do things which technology could automate.
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
It's been my experience that LLMs are only better than me at stuff I'm bad at. It's noticeably worse than me at things I'm good at. So the answer to your question depends: can your child get good at things while leaning on an LLM?
I don't know the answer to this. Maybe schools need to expect more from their students with LLMs in the picture.
The rate of improvement with LLMs seems to have halted since Claude3.5, which was about a year ago. I think we’ve probably gone as far as we can go with tweaks to transformer architectures, and we’ll need a new academic discovery which could take years to do better
> Why should I learn to do X if I can just ask an LLM and it will do it better than me
The same way you answer - "Why should I memorise this if I can always just look it up"
Because your perceptual experience is built upon your knowledge and experiences. The entire way you see the universe is altered based on these things, including what you see through your eyes, what you decide is important and what you decide to do.
The goal of life is not always "simply to do as little as possible", or "offload as much work as possible" but lots of the time includes struggling through the fundimentals so that you become a greater version of yourself, it is not the complete task that we desire, it is who you became while you did the work that we desire.
After reading the above it dawned on me that the human brain needs to develop spatial awareness and not using that capability of the brain very slowly shuts it off. So I purposefully turn off the gps when I can.
I think not fully developing each of those abilities might have some negative effects that will be hard to diagnose.
"More or less" is doing a lot of work there. School, at least where I am, still spends the first year getting children to memorize the order of the numbers from 1-20 and if there's an even or odd number of a thing on a picture.
Do you google if 5 is less than 6 or do you just memorize that?
If you believe that creativity is not based on a foundation of memorization and experience (which is just memorization) you need to reflect on the connection between those.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
However, I am going to hazard a guess that you still care about your child's ability to do arithmetic, even though calculators make that trivial.
And if I'm right, I think it's for a good reason—learning to perform more basic math operations helps build the foundation for more advanced math, the type which computers can't do trivially.
I think this applies to AI. The AI can do the basic writing for you, but you will eventually hit a wall, and if all you've ever learned is how to type a prompt into ChatGPT, you won't ever get past that wall.
----
Put another way:
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
"Because eventually, you will be able to do X better than any LLM, but it will take practice, and you have to practice now."
>>>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interest in penmanship, spelling, or cartography. <<<
For me it is the second order benefits, notably the idea of "attention to detail" and "a feel for the principles". The principles of each activity being different: writing -> fine motor control, spelling -> word choice/connotation, map -> sense of direction, (my own insert here) money handling -> cost of things
All of them involve "attention to detail" because that's what any activity is - paying attention to it.
But having built up the experience in paying attention to [xyz], you can now be capable when things go wrong.
IE catch disputable transaction on the credit card, or note being told by the shop clerk "No Returns" when their policy says otherwise, un-losting yourself when the phone runs out of battery in the city.
Notably, you don't have to be trained for the details in traditional ways like writing the same sentence 100 times on a piece of paper. Learning can be fun and interesting.
Children can write letters to their friends well before they get their own phone. Geocaching/treasure hunts(hand drawn mud maps!)/orienteering for map use.
As for LLM ... well currently "attention to detail" is vital to spot the (handwave number) 10% of when it goes wrong. In the future LLMs may be better.
But if you want to be better than your peers at any given thing - you will need an edge somewhere outside of using an LLM. Yet still, spelling/word choice/connotations are especially linked to using an LLM currently.
Knowing how to "pay attention to detail" when it counts - counts.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
I don't know. I really feel like the auto-correct features are out to get me. So many times I want to say "in" yet it gets corrected to "on", or vice-versa. I also feel like it does the same to me with they're/their/there. Over the past several iOS/macOS updates, I feel like I've either gotten dumber and no longer do english gooder, or I'm getting tagged by predictive text nonsense.
Universities still teach you calculus and real analysis even though Wolfram Alpha exists. It boils down to your willing to learn something. An LLM can't understand things for you.
I'm "early genz" and I write code without llm because I find data structure and algorithm very interesting and I want to learn the concepts not because I'm in love with the syntax of C or Rust (I love the syntax of C btw).
Children will lack the critical thinking for solving complex problems, and even worse, won't have the work ethic for dealing with the kinds of protracted problems that occur in the real world.
But maybe that's by design. I think the ownership class has decided productivity is more important than societal malaise.
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
Even if you use a tool to do work, you still have to understand how your work will be checked to see whether it meets expectations.
If the expectation is X, and your tool gives you Y, then you’ve failed - no matter if you could have done X by hand from scratch or not, it doesn’t really matter, because what counts is whether the person checking your work can verify that you’ve produced X. You agreed to deliver X, and you gave them Y instead.
So why should you learn to do X when the LLM can do it for you?
Because unless you know how to do X yourself, how will you be able to verify whether the LLM has truly done X?
Your kid needs to learn to understand what the person grading them is expecting, and deliver something that meets those expectations.
That sounds like so much bullshit when you’re a kid, but I wish I had understood it when I was younger.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
What I don't like are all the hidden variables in these systems. Even GPS, for example, is making some assumptions about what kind of roads you want to take and how to weigh different paths. LLMs are worse in this regard because the creators encode a set of moral and stylistic assumptions/dictates into the model and everybody who uses it is nudged into that paradigm. This is destructive to any kind of original thought, especially in an environment where there are only a handful of large companies providing the models everyone uses.
Your child perhaps shouldn't learn things that computers can do. But they should learn something to make themselves more useful than every uneducated person. I'm not sure schools are doing much good anymore teaching redundant skills. Without any abilities beyond the default, they'll grow up to be poor. I don't know what that useful education is but I expect something sort of thinking skills, and perhaps even giant piles of knowledge to apply that thinking to.
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
1. You won’t always have an LLM. It’s the same reason I still have at least my wife’s phone number memorized.
2. So you can learn to do it better. See point 1.
I wasn’t allowed to use calculators in first and second grade when memorizing multiplication tables, even though a calculator could have finished the exercise faster than me. But I use that knowledge to this day, every day, and often I don’t have a calculator (my phone) handy.
> there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
It's not true even though it's accepted. Rote memorization has a place in an education. It does strengthen learning and allow one to make connections between the things seen presently and things remembered, among other things.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
That sounds like setting-up your child for failure, to put it bluntly.
How do you want to express a thought clearly if you already fail at the stage of thinking about words clearly?
You start with a fuzzy understanding of words, which you delegated to a spellchecker, added to a fuzzy understanding of writing, which you've delegated to a computer, combined with a fuzzy memory, which you've delegated to a search engine, and you expect that not to impact your child's ability to create articulate thoughts and navigate them clearly?
To add irony to the situation, the physical navigation skills have, themselves, been delegated to a GPS..
Brains are like muscles, they atrophy when not used.
Reverse that course before it's too late, or suffer (and have someone else suffer) the consequences.
This is a good point, and part of the unwritten rationale of the argument I was trying to make.
At first glance, knowing how to spell a word and understanding a word should be perfectly orthogonal. How could it not be? Saying that it is not so would imply that civilizations without writing would have no thought or could not communicate through words, which is preposterous.
And yet, once we start delegating our thinking, our spelling and our writing to external black boxes, our grasp on those words and our grasp of those words becomes weaker. To the point that knowing how to spell a word might become a much bigger part, relatively, of our encounter with those words, as we are doing less conceptual thinking about those words and their meaning.
And therefore, I argue that, in a not too far-fetched extremum, understanding a word and knowing how to spell a word might not be fully orthogonal.
Well, I wouldn't say they're completely orthogonal, knowing how a word is spelled can sometimes give insight into the meaning of the word. I think they're mostly orthogonal though; it's fairly common for people to know what a word means without knowing how to spell it, and on the flip side there are people, like Scrabble players, who know how to spell a lot of words which they don't really know the meaning of. I've heard of one guy who is a champion French Scrabble player who can't actually understand French.
Bullshit! You cannot do second order reasoning with a set of facts or concepts that you have to look up first.
Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.
People will think you are a wizard if you read documentation and bother to remember it, because they're still busy asking Google or ChatGPT while you're happily coding without pausing
That's simply not true. I use mental arithmetic skill every day. It's irritating or funny when you come across someone who struggles with it, depending on the situation.
Obviously basic levels are needed but the ability to say multiply 4 digit numbers in your head is totally superfluous. Theres a parallel to software engineering there.
Being able to do basic math in your head is valuable just in terms of basic practicality (quickly calculating a tip or splitting a bill, doubling a recipe, reasoning about a budget...), but this is a poor analogy anyway because 3x2 is still 3x2 regardless of how you get there whereas creative work produced by software is worthless.
Mental math is essential for having strong numerical fluency, for estimation, and for reasoning about many systems. Those skills are incredibly useful for thinking critically about the world.
To a certain point. Basic arithmetic is important but the ability to calculate large square roots or multiply multi-digit numbers is not very relevant when you can trivially calculate them on your phone in seconds.
> Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.
Not to mention discernment and info literacy when you do need to go to the web to search for things. AI content slop has put everybody who built these skills on the back foot again, of course.
>Why should I learn to do X if I can just ask an LLM and it will do it better than me
This may eventually apply to all human labor.
I was thinking, even if they pass laws to mandate companies employ a certain fraction of human workers... it'll be like it already is now: they just let AI do most of the work anyway!
It’s all about critical thinking. The answer to your kid is that LLMs are a tool and until they run the entire economy there will still need to be people with critical thinking skills making decisions. Not every task at school helps hone critical thinking but many of them do.
>So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
Realistically it comes down to the idea that being an educated individual that knows how to think is important for being successful, and learning in school is the only way we know to optimize for that, even if it's likely not the most efficient way to do so.
The scope of what’s useful to know changes with tools, but having a bullshit detector requires actually knowing some things and being able to reason about the basics.
It’s not that LLM’s are particularly different it’s that people are less able to determine when they are messing up. A search engine fails and you notice, an LLM fails and your boss, customer, ect notices.
A large part was to preserve cultural knowledge, which is kind of like answering questions about it. What wisdom or knowledge does this entail. People do the same with religious texts today
The other part I imagine was largely entertainment, social and memory is a good skill to build.
It doesn’t seem that different from having to write a book report or something like that. Back in school, we also needed to memorize poems and songs to recite them - I quite hated it because my memory was never exactly great. Same as having to remember the vocabulary in a foreign language when learning it, though that might arguably be a bit more directly useful.
>When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
Au contraire! It is quite wrong and was wrong then too. "Rote memorisation" is a slur for knowledge. Knowledge is still important.
Knowledge is the basis for skill. You can't have skill or understanding without knowledge because knowledge is illustrative (it gives examples) and provides context. You can know abstract facts like "addition is abelian" but that is meaningless if you can't add. You can't actually program if you don't know the building blocks of code. You can't write a C program if you have to look up the function signature of read(2) and write(3) every time you need to use them.
You don't always have access to Google, and its results have declined procipitously in quality in recent years. Someone relying on Google as their knowledge base will be kicking themselves today, I would claim.
It is a bit like saying you don't need to learn how to do arithmetic because of calculators. It misses that learning how to do arithmetic isn't just important for the sake of being able to do it, but for the sake of building a comfort with numbers, building numerical intuition, building a feeling for maths. And it will always be faster to simply know that 6x7 is 42 than to have to look it up. You use those basic arithmetical tasks 100 times every time you rearrange an equation. You have to be able to do them immediately. It is analogous.
Note that I have used illustrative examples. These are useful. Knowledge is more than knowing abstract facts like "knowledge is more than knowing abstract facts". It is about knowing concrete things too, which highlight the boundaries of those abstract facts and illustrate their cores. There is a reason law students learn specific cases and their facts and not just collections of contextless abstract principles of law.
>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers),
Writing legibly is important for many reasons. Note taking is important and often isn't and can't be done with a computer. It is also part of developing fine motor skills generally.
>spell very well (spell check keeps us professional),
Spell checking can't help with confusables like to/two/too, affect/effect, etc. and getting those wrong is much more embarrassing than writing "embarasing" or "parralel". Learning spelling is also crucial because spelling is an insight into etymology which is the basis of language.
>reading a map to get around (GPS), etc
Reliance on GPS means never building a proper spatial understanding. Many people that rely on GPS (or being driven around by others) never actually learn where anything is. They get lost as soon as they don't have a phone.
>but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection).
Memorising poetry is a different sort of thing--it is a value judgment not a matter of practicality--but it is valuable in itself. We have robbed generations of children of their heritage by not requiring them to learn their culture.
This is how we end up with people who cant write legibly, cant smell bad maths (on the news/articles/ads), cant change tires, have no orienteering or sense of direction and memories like swiss cheese. Trust the oracle son. /s
I think all of the above do one thing brilliantly, built self confidence.
Its easy to get bullshitted if what youre able to hold in your head is effectively nothing.
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this, it's called the "Flipped classroom" approach.
Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.
I've also talked with a bunch of teachers and a couple admins about this. They agree it's a huge problem. By the same token, they are using AI to create their lesson plans and assignments! Not fully of course, they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.
The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.
The education model at high school and undergrad uni has not changed in decades, I hope AI leads to a fundamental change.
Homework being made easy by AI is a symptom of the real issues.
Being taught by uni students who learned the curriculum last year, lecturers who only lecture due to obligation and haven't changed a slide in years.
Lecturers who refuse to upload lecture recordings or slides.
Just a few glaring issues, the sad part these are rather superficial easy to fix cases of poor teaching.
I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response to be made by teaching establishments.
If anything AI will lead to bigger differences in student learning.
Those who learn core concepts and to critically think will be become more valuable and the people who just AI everything will become near worthless.
Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.
And yet all the people who created all the advances in AI have extremely traditional, extremely good, fancy educations, and did absolutely bonkers amount of homework. The thing you are talking about is very aspirational.
There's some sad irony to that, making homework easier for future generations but those generations being worse off as a result on average. The lack of AI assistance was a forcing function to greater depth.
Outliers will still work hard and become even more valuable, AI won't affect them negatively.
I feel non outliers will be affected negatively on average in ability to learn/think.
With no confirming data, I feel those who got that fancy education would do so in any other institution. Just those fancy institutions draw in and filter for intelligent types, not teach them to be intelligent as it's practically a pre-requisite.
I don't see a future that doesn't involve some form of AR glasses and individual tuned learning. Forget teachers, you will just don your learning glasses and have an AI that walks you through assignments and learning everyday.
That is if learning-to-become-a-contributing-member-of-society doesn't become obsolete anyway.
> Flipped classroom is just having the students give lectures, instead of the teacher.
Not quite. Flipped classroom means more instruction outside of class time and less homework.
> This is called "proctored exams" and it's been pretty common in universities for a few centuries. None of this addresses the real issue
Proctored exams is part of it. In-class assignments is another. Asynchronous instruction is another.
And yes, it addresses the issue. Students can use AI however they see fit, to learn or to accomplish tasks or whatever, but for actual assessment of ability they cannot use AI. And it leaves the door open for "open-book" exams where the use of AI is allowed, just like a calculator and textbook/cheat-sheet is allowed for some exams.
Flipped classroom sounds horrible to me. I never liked being given time to work on essays or big projects in class. I prefer working at home, where the environment is much more comfortable and I can use equipment the school doesn't have, where I can wait until I'm in the right mood to focus, where nobody is pestering me about the intermediary stages of my work, etc.
It also seems like a waste of having an expert around to be doing something you could do at home without them.
Exams should increasingly be written with the idea in mind that students can and will use AI. Open book exams are great. They're just harder to write.
I should add that upon reflection, I did have some really good "flipped classroom" experiences in college, especially in highly technical math and philosophy courses. But in those cases (a) homework was really vital, (b) significant work was never done in class, and (c) we never watched lectures at home. Instead, the activity at home (which did replace lectures) was reading textbooks (or papers) and doing homework. Then class time was like collective office hours.
Failure to do the homework made class time useless, the material was difficult, and the instructors were willing to give out failing grades. So doing the homework was vital even when it wasn't graded. Perhaps that can also work well here in the context of AI, at least for some subjects.
Thank you, it's amazing how people don't even try to understand what words mean before dismissing it. Flipped makes way more sense anyway since lectures aren't terribly interactive. Being able to pause/replay/skip around in lectures is underrated.
Except that students don't watch the videos. We have so much log data on this - most of them don't bother to actually watch the videos. They intend to, they think they will, but they don't.
As a university student currently taking a graduate course with a "flipped classroom" curriculum, I can confirm that many students in the class aren't watching the posted videos.
I myself am one of them, but I attribute that to the fact that this is a graduate version of an undergrad class I took two years ago (but have to take the grad version for degree requirements). Instead, I've been skimming the posted exercises and assessing myself which specific topics I need to brush up on.
If they can perform well without reviewing the material, that's a problem with either the performance measure or the material.
And not watching lectures is not the same as not reviewing the material. I generally prefer textbooks and working through proofs or practice problems by hand. If I listen to someone describe something technical I zone out too quickly. The only exception seems to be if I'm able to work ahead enough that the lecture feels like review. Then I'm able to engage.
I’m a physicist. I can align and maximize ANY laser. I don’t even think when doing this task. Long hours of struggle, 50 years ago. Without struggle there is nothing. You can bullshit your way in. But you will be ejected.
A master blacksmith can shoe a horse an' all. Laser alignment is also a solved problem with a machine. Just because something can be done by hand does not mean it has any intrinsic value.
> But that struggling was ultimately necessary to really learn the concepts.
This is what isn't explained or understood properly (...I think) to students; on the surface you go to college/uni to learn a subject, but in reality, you "learn to learn". The output that you're asked to submit is just to prove that you can and have learned.
But you don't learn to learn by using AI tools. You may learn how to craft stuff that passes muster, gets you a decent grade and eventually a piece of paper, but you haven't learned to learn.
Of course, that isn't anything new, loads of people try and game the system, or just "do the work, get the paper". A box ticking exercise instead of something they actually want to learn.
The challenge is that while LLMs do not know everything, they are likely to know everything that's needed for your undergraduate education.
So if you use them at that level you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then later in life when you actually hit problem domains that the LLM wasn't trained in, you'll not have learned the thinking patterns needed to persist and solve those problems.
Is that necessarily a bad thing? It's mixed:
- You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level.
- For more senior roles that are intrinsically solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people who are ready to take on that role.
It's like taking the college weed out classes and shifting those to people in the middle of their career.
Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.
Business will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past
Agreed, the struggle often leads us to poke and prod an issue from many angles until things finally click. It lets us think critically. In that journey you might've learned other related concepts which further solidifies your understanding.
But when the answer flows out of thin air right in front of you with AI, you get the "oh duh" or "that makes sense" moments and not the "a-ha" moment that ultimately sticks with you.
Now does everything need an "a-ha" moment? No.
However, I think core concepts and fundamentals need those "a-ha" moments to build a solid and in-depth foundation of understanding to build upon.
Yep. People love to cut down this argument by saying that a few decades ago, people said the same thing about calculators. But that was a problem too! People losing a large portion of their mental math faculty is definitely a problem. If mental math was required daily, we wouldn't see such obvious BS numbers in every kind of reporting(media/corporate/tech benchmarks) that people don't bat an eye at. How much the problem is _worth_ though, is what matters for adoption of these kinds of tech. Clearly, the problem above wasn't worth much. We now have to wait and see how much the "did not learn through cuts and scratches" problem is worth.
Absolutely this. AI can help reveal solutions that weren't seen. An a-ha moment can be as instrumental to learning as the struggle that came before.
Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.
> I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
In the end the willingness to struggle will set apart the truly great Software Engineer from the AI-crutched. Now of course this will most of the time not be rewarded, when a company looks at two people and sees “passable” code from both but one is way more “productive” with it (the AI-crutched engineer) they’ll inititally appreciate this one more.
But in the long run they won’t be able to explain the choices made when creating the software, we will see the retraction from this type of coding when the first few companies’ security falls apart like a house of cards due to AI reliance.
It’s basically the “instant gratification vs delayed gratification” argument but wrapped in the software dev box.
I don't wholly disagree with this post, but I'd like to add a caveat, observing my own workflow with these tools.
I guess I'd qualify to you as someone "AI crutched" but I mostly use it for research and bouncing ideas (or code complete, which I've mentioned before - this is a great use of the tool and I wouldn't consider it a crutch, personally).
For instance, "parse this massive log output, and highlight anything interesting you see or any areas that may be a problem, and give me your theories."
Lots of times its wrong. Sometimes its right. Sometimes, its response gives me an idea that leads to another direction. It's essentially how I was using google + stack overflow ten years ago - see your list of answers, use your intuition, knowledge, and expertise to find the one most applicable to you, continue.
This "crutch" is essentially the same one I've always used, just in different form. I find it pretty good at doing code review for myself before I submit something more formal, to catch any embarrassing or glaringly obvious bugs or incorrect test cases. I would be wary of the dev that refused to use tools out of some principled stand like this, just as I'd be wary of a dev that overly relied on them. There is a balance.
Now, if all you know are these tools and the workflow you described, yea, that's probably detrimental to growth.
I've been calling this out since the rise of ChatGPT:
"The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the LLM to provide an answer, rather than taking a few moments to quietly ponder the problem on your own. By reaching for it to solve any problem at nearly an instinctual level you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning."
I've had multiple situations where AI has helped me get to the solution because it has been unable to get there itself. But that I wouldn't have realised that solution otherwise. In one case, looking for a plot, it delivered many woeful options but one sparked an alternative thought that got me on track. In other cases trying to debug code, having it talk through the logic/flow and exhaust other fixes, I have managed to solve the problem despite not being experienced at all with that language.
The dangers I've found personally are more around how it eases busywork, so I'm more inclined to be distracted doing that as though it delivers actual progress.
I agree in principal - the process of problem solving is the important part.
However I think LLMs make you do more of this because of what you can offload to the LLM. You can offload the simpler things. But for the complex questions that cut across multiple domains and have a lot of ambiguity? You're still going to have to sit down and think about it. Maybe once you've broken it into sufficiently smaller problems you can use the LLM.
If we're worried about abstract problem solving skills that doesnt really go away with better tools. It goes away when we arent the ones using the tools.
You can offload the simpler things, but struggling with the simpler things is how you build the skills to handle the more complex ones that you can't hand off.
If the simpler thing in question is a task you've already mastered, then you're not losing much by asking an LLM to help you with it. If it's not trivial to you though, then you're missing an opportunity to learn.
The biology of the human brain will not change as a result of these LLMs. We are imperfect and will tend to take the easiest route in most cases. Having an "all powerful" tool that can offload the important work of figuring out tough problems seems like it will lead to a society less capable in solving complex problems.
The counter argument is that now you can skip boilerplate code and focus on the overall design and the few points that brainpower is really needed.
The amount of visualizations that i have made after chat gpt was released has increased exponentially. I loath looking the documentation again and again to make a slightly non standard graph. Now all of the friction is gone! Graphs and visuals are everywhere in my code!
> focus on [...] the few points that brainpower is really needed
The person you're responding to is talking about it from an educational perspective though. If your fundamentals aren't solid, you won't know that exponentially smoothed reservoir sampling backed by a splay tree is optimal for your problem, and ChatGPT has no clue either. Trying things, struggling, and failing is crucial to efficient learning.
Not to mention, you need enough brain power or expertise to know when it's bullshitting you. Just today it was telling me that a packed array was better than my proposed solution, confidently explaining why, and not once saying anything correct. No prompt changes could fix it (whether restarting or replying), and anyone who tried to use less brainpower there would be up a creek when their solution sucked.
Mind you, I use LLMs a lot, including for code-adjacent tasks and occasionally for code itself. It's a neat tool. It has its place though, and it must be used correctly.
Homework helps reinforce the material learned in class. It's already a problem where there is too much material to be fit into a single class period. Trying to cram in enough time for homework will only make that problem worse.
Keeping the curriculum fixed, there's already barely enough time to cover everything. Cutting the amount of lectures in half to make room for in-class homework time does not fix this fundamental problem.
* due to either learning/concentration issues
* the fact that most lecturers are boring, dull, and unengaging
* and oftentimes you can learn better from other sources
making lecture longer doesn't fix a single one of these issues. it just makes students learn even less.
Yeah, the concept of "productive struggle" is important to the education process and having a way to short circuit it seems like it leads to worse learning outcomes.
I am not sure all humans work the same way though. Some get very very nervous when they begin to struggle. So nervous that they just stop functioning.
I felt that during my time in university. I absolutely loved reading and working through dense math text books but the moment there was a time constraint the struggle turned into chaos.
I think teachers also need to reconsider how they are measuring mastery in the subject. LLMs exist. There is no putting the cat back into the bag. If your 1980s way to measure a student's mastery of a subject can be fooled by an LLM, then how effective is that measurement in 2020+? Maybe we need to stop using essays as a way to tell if the student has learned the material.
Don't ask me what the solution is. Maybe your product does it. If I knew, I'd be making a fortune selling it to universities.
I don't think asking "what's wrong with my code" hurts the learning process. In fact, I would argue it helps it. I don't think you learn when you have reached your frustration point and you just want the dang assignment completed. But before reaching that point, if you had a tutor or assistant that you could ask, "hey, I'm just not seeing my mistake, do you have ideas" goes a long way to foster learning. ChatGPT, used in this way, can be extremely valuable and can definitely unlock learning in new ways which we probably even haven't seen yet.
That being said, I agree with you, if you just ask ChatGPT to write a b-tree implementation from scratch, then you have not learned anything. So like all things in academia, AI can be used to foster education or cheat around it. There's been examples of these "cheats" far before ChatGPT or Google existed.
No I think the struggle is essential. If you can just ask a tutor (real or electronic) what is wrong with your code, you stop thinking and become dependent on that. Learning to think your way through a roadblock that seems like a showstopper is huge.
It's sort of the mental analog of weight training. The only way to get better at weightlifting is to actually lift weight.
If I were to go and try to bench 300lbs, I would absolutely need a spotter to rescue me. Taking on more weight than I can possibly achieve is a setup for failure.
Sure, I should probably practice benching 150lbs. That would be a good challenge for me and I would benefit from that experience. But 300lbs would crush me.
Sadly, ChatGPT is like a spotter that takes over at the smallest hint of struggle. Yes, you are not going to get crushed, but you won't get any workout done either.
You really want start with a smaller weight, and increment it in steps as you progress. You know, like a class or something. And when you do those exercises, you really want to be lifting those weights yourself, and not rely on spotter for every rep.
We're stretching the metaphor here. I know, kind of obnoxious.
If I have accidentally lifted too much weight, I want a spotter that can immediately give me relief. But yes, you're right. If I am always getting a spot, then I'm not really lifting my own weight and indeed not making any gains.
I think the question was, "I'm stuck on this code, and I don't see an obvious answer." Now the lazy student is going to ask for help prematurely. But that doesn't preclude ChatGPT's use to only the lazy.
If I'm stuck and I'm asking for insight, I think it's brilliant that ChatGPT can act as a spotter and give some immediate relief. No different than asking for a tutor. Yes maybe ChatGPT gives away the whole answer when all you needed is a hint. That's the difference between pure human intelligence and just the glorified search engine that is AI.
And quite probably, this could be a really awesome way in which AI learning models could evolve in the context of education. Maybe ChatGPT doesn't give you the whole answer, instead it can just give you the hint you need to consider moving forward.
Microsoft put out a demo/video of a grad student using Copilot in very much this way. Basically the student was asking questions and Copilot was giving answers that were in the frame of "did you think about this approach?" or "consider that there are other possibilities", etc. Granted, mostly a marketing vibe from MSFT, but this really demonstrates a vision for using LLMs as a means for true learning, not just spoiling the answer.
Sure, this is possible. Also Chegg is an "innovative learning tool", not a way to cheat.
I agree that it's not that different than asking a tutor though, assuming it's a personal tutor whom are you paying so they won't ever refuse to answer. I've never had access to someone like that, but I can totally believe that if I did, I would graduate without learning much.
Back to ChatGPT: during my college times I've had plenty of times when I was really struggling, I remember feeling extremely frustrated when my projects would not work, and spending long hours in the labs. I was able to solve this myself, without any outside help, be it tutors or AI - and I think this was the most important part of my education, probably at least as important as all the lectures I went to. As they say, "no pain, no gain".
That said, our discussion is kinda useless - it's not like we can convince college students to stop using AI. The bad colleges will pass everyone (this already happens), the good colleges will adapt (probably by assigning less weight to homework and more weight to in-class exams). Students will have another reason to fail the class: in additional to classic "I spend whole semester partying/playing computer games instead of studying", they would also say "I never opened books and had ChatGPT do all assignments for me, why am I failing tests?"
Students do something akin to vibe coding I guess. It may seem impressive at first glance but if anything breaks you are so, so lost. Maybe that’s it, break the student’s code, see how they fix it. The vibe coding student is easily separate from the real one (of course this real coder can also use AI, just not yoloing it).
I guess you can apply similar mechanics to reports. Some deeper questions and you will know if the report was self written or if an AI did it.
>For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.
It doesn't matter who coherent the output is - the students will paste it anyway, then fail the assignment (and you need to deal with grading it) and then complain to parents and school board that you're incompetent because you're failing the majority of the class.
Your post is based in a misguided idea that students actually care about some basic quality of their work.
Sure. Works in my IDE. "Create a linked list implementation, use that implementation in a method to reverse a linked list and write example code to demonstrate usage".
Working code in a few seconds.
I'm very glad I didn't have access to anything like that when I was doing my CS degree.
Yeah, and forget about giving skeleton code to students they should fill in; using an AI can quite frequently completely ace a typical undergraduate level assignment. I actually feel bad for people teaching programming courses, as the only real assessment one can now do is in-class testing without computers, but that is a strange way to test students’ ability to write and develop code to solve certain classes of problems…
Hopefully someone is thinking about adapting the assessments. Asking questions that focus on a big picture understanding instead of details on those in-class tests.
Yeah. On the other hand, "implement boruvkas MST algorithm in cuda such that only the while(numcomponents > 1) loop runs on the CPU, and everything else runs in the gpu. Memcpy everything onto the gpu first and only transfer back the count each iteration/keep it in pinned memory"
It never gets it right, even after many reattempts in cursor. And even if it gets it right, it doesn't do the parallelization effectively enough - it's a hard problem to parallelize.
As I said, I'm not a student, so I don't have access to a homework assignment to paste in. Ironically I have pretty much everything I ever submitted for my undergrad, but it seems like I absolutely never archived the assignments for some reason.
since late 2024/early 2025 it now is the case, especially with a reasoning model like Sonnet 3.7, DeepSeek-r1, o3, Gemini 2.5, etc., and especially if you upload the textbook, slides, etc alongside the homework to be cheated on.
most normal-difficulty undergraduate assignments are now doable reliably by AI with little to no human oversight. this includes both programming and mathematical problem sets.
for harder problem sets that require some insight, or very unstructured larger-scale programming projects, it wouldn't work so reliably.
but easier homework assignments serve a valid purpose to check understanding, and now they are no longer viable.
I spent much of the past year at public libraries, and I heard the word ChatGPT approximately once per minute, in surround sound. Always from young people, and usually in a hushed tone...
In one way I'm glad I learned to code before LLM:s. It would be so hard to push through the learning now when you are just a click away from buildning the app with AI...
There is no graded homework, the coursework is there only as a guide and practice for the exams.
So you can absolutely use LLMs to help you with the exercises or to help understand something, however if you blindly get answers you will only be fooling yourself as you won't be able to pass the exams.
That’s how most schooling has already been in a lot of South and East Asia. If you don’t do your homework, you get punished in other ways, but it doesn’t have any impact on the overall grade, the grade solely depends on the final exam.
Currently in university and my experience is that it heavily depends on the module. For a lot your statement's probably accurate, however for others it really isn't. For example we have a microprocessors module which is programming for an rp2040 in c, but also manually setting up interrupt handlers etc in assembly. All of the LLM's are completely useless for it, they tell you that the rp2040 works in ways it just doesn't and are actively unhelpful with the misinformation. The only students who can do well in that module are the ones that understand the material well and go to the datasheet and documentation instead of an LLM.
I'm more interested in memory and knowledge retention in general and how AI can assist. How many times have you heard from people that they are doing rote memorization and will "data dump" test information once a course is over. These tools are less to blame than the motivators and systems that are suppose to be engaging students in real learning and the benefits of a struggle.
Another problem is there is so much in technology, I just can't remember everything after years of exposure to so many spaces. Not being able to recall information you used to know is frustrating and having AI to remind you of details is very useful. I see it as an amplifying tool, not a replacement for knowledge. I'm sure there are some prolific note taking memory tricksters out there but I'm not one of them.
I frequently forget information over time and it's nice to have a tool to remind me of how UDP, RTP, and SIP routing work when I haven't been in the comm or network space for a while.
My CS undergrad school used to let students look up documentation during coding exams. Most courses had a 3-5 hour coding challenge where you had to make substantial changes to a course project you had developed. I think this could also be the right response to LLMs. Let students use whatever they want to use, and test true skills and understanding.
FWIW, exams testing rote learning without the ability to look up things would have been much easier. It was really stressful to sit down and make major changes to your project to satisfy new unit tests, which often targeted edge cases and big O complexity to crash your code.
Yes, it led to well-rounded learning. But we had too many courses and, overall, I think it was too much. All CS courses had a theoretical exam, some project-based learning, and some coding exam to prevent cheating in the project-based learning part.
I don't get this reasoning. Without LLMs I would learn how to write sub-optimal code that is somewhat functional. With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster. On top of that it always makes dumb mistakes which forces you to actually understand what it's spitting out to get it to work properly. Again: that helps with learning.
The fact that you can ask it for a solution for exactly the context you're interested in is amazing and traditional learning doesn't come close in terms of efficiency IMO.
> With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster.
No, you see a plausible set of tokens that appear similar to how it's done, and as a beginner, you're not able to tell the difference between a good example and something that is subtly wrong.
So you learn something, but it's wrong. You internalize it. Later, it comes back to bite you. But OpenAI keeps the money for the tokens. You pay whether the LLM is right or not. Sam likes that.
This makes for a good sound bite but it's just not true. The use case of "show me what is a customary solution to <problem>" plays exactly into LLMs strength as a funny kind of search engine. I used to (and still do) search public code for this use case to get a sense of the style and idioms common in a new language/library
and the plausible set of tokens is doing exactly that.
It’s more like looking up the solution to the math problem you’re supposed to solve on your own. It can be helpful in some situations, but in general you don’t learn the problem-solving skills if you don’t do it yourself.
Exactly. For vast majority of students myself included just looking at ready solution is actually very poor way to study. And LLMs are exactly this. Ready solution generators. Doing with things like math and programming is learning.
And same goes for art. You do not become master in art by looking at art or even someone drawing...
> I think the issue is that it's so tempting to lean on AI.
This is not the root cause, it's a side effect.
Student's cheat because of anxiety. Anxiety is driven by grades, because grades affect failure. To detect cheating is solving the wrong problem. If most of the grades did not directly affect failure, student's wouldn't be pressured to cheat. Evaluation and grades have two purposes:
1. Determine grade of qualification i.e result of education (sometimes called "summative")
2. Identify weaknesses to aid in and optimise learning (sometimes called "formative")
The problem arises when these two are conflated, either by combining them and littering them throughout a course, or when there is an imbalance in the ratio between them i.e too much of #1. Then the pressure to cheat arises, the measure becomes the target, and focus on learning is compromised. This is not a new problem, student's already waste time trying to undermine grades through suboptimal learning activities like "cramming".
The funny thing is that everyone already knows how to solve cheating: controlled examination, which is practical to implement for #1, so long as you don't have a disruptive number of exams filling that purpose. This is even done in sci-fi, Spok takes a "memory test" in 2286 on Vulkan as a kind of "final exam" in a controlled environment with challenges from computers - it's still using a combination of proxy knowledge based questions and puzzles, but it doesn't matter, it's a controlled environment.
What's needed is a separation and balance between summative and formative grading, then preventing cheating is almost easy, and student's can focus on learning... cheating at tests throughout the course would actually have a negative affect on their final grade, because they would be undermining their own learning by breaking their own REPL.
LLMs have only increased the pressure, and this may end up being a positive thing for education.
>I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts.
This is entirely your opinion. We don't know how the brain learns nor do we know if intelligence can be "taught"
I think this is a structural issue. Universities right now are trying to justify their existence - universities of the past used to be sites of innovation.
Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does. Don't do silly textbook challenges for exams anymore - reestablish a culture of scientific innovation!
Incorrect. Fundamentals must be taught in order to provide the context for the more challenging open-ended activities. Memorization is the base of knowledge, a starting point. Cheating (whether through an LLM or hiring someone or whatever) skips the journey. You can't just take them through the exciting routes, sometimes they have to go through the boring tedious repetitive stuff because that's how human brains learn. Learning is, literally, a stressful process on the brain. Students try to avoid it, but that's not good for them. At least in the introductory core classes.
I guess I should have phrased it differently - what I meant was just stop testing the tedious stuff, make it clear to students that learning the fundamentals is expected. Then examine them on hard exploratory problems which require the fundamentals.
> Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does.
LLMs can't produce intellectual rigour. They get fine details wrong every time. So indeed using ChatGPT for doing your reasoning for you produces inferior results. By normalising non-rigorous yet correct sounding answers, we drive down the expectations.
To take a concrete example, if you tell a student to implement memcpy with chatgpt, and it will just give an answer which uses uint64 copying. The student has not thought from first principles (copy byte by byte? Improve performance? How to handle alignment?). This lack of insight in return to immediate gratification will bite later.
It's maybe not problem for non-STEM fields where this kind of rigor and insight is not required to excel. But in STEM fields, we write programs and prove theorems for insight. And that insight and the process of obtaining it is gone with AI.
You claim using AI tools doesn't dumb you down, but it very well could and is. Take the calculator for example, I'm overly dependent on it. I'm slower to perform arithmetic than I would have been without it. But knowing how to use one allows me to do more complex math more quickly. So I'm "dumber" in one way and "smarter" in others. AI could be the same... except our education system doesn't seem ready for it. We still learn arithmetic, even if we later rely in tools to do it. Right now teachers don't know how to teach so that AI doesn't trivialize things.
You need to know how to do things so you know when the AI is lying to you.
I agree that you should learn the fundamentals before taking shortcuts. I just don't view it as the universities' job to repeatedly remind their students of this, that's elementary/high school style. In universitiy, just give them hard problems requiring fundamental knowledge and cross-checking capabilities but don't restrict their tools.
I TA'd for the fundamentals of computer science I in college. In addition to being a great class for freshman, teaching it every year really did help keep me sharp.
High schools are a long way off from that level of education. I took AP CS in highschool and it was a joke by comparison. Of course YMMV. The best highschool CS course might be better than the worst university level offerings. We would always have know it all students who learned Java in high school. They either appreciated the new perspective on the fundamentals and did well, or they blew off the class and failed when it got harder.
We could keep the same teaching offerings, my main gripe is with the assignments/examinations. It just feels wrong to complain about students using AI while at the same time continuing to hand out tasks that are trivial to solve using AI.
I also worked for the faculty for the better part of my university studies, and I know that ultimately changing the status quo is most likely impractical. There are not enough resources to continuously grade open-ended assignments for so many students and they probably need the pedagogical pressure to learn fundamentals. Still makes me a bit bitter from time to time.
Agreed, the only thing that is certain is that they are cheating themselves.
While it can be useful to use LLMs as a tutor if you're stuck. The moment that you use it to provide a solution, you stop learning and the tool becomes a required stepping stone.
here is an idea, curious what others think of this:
split the entire coursework into two parts:
part 1 - students are prohibited from using AI. Have the exams be on physical papers than digital ones requiring use of laptop/computer. I know this adds burden on corrections and evaluations of these answers, but I think this provides a raw answer to someone's understanding of concepts being taught in the class.
part2 - students are allowed, and even encouraged to use LLMs. And they are evaluated based on the overall quality of the answer, keeping in mind that a non zero portion of this was generated using an LLM. Here the credit should be given to the factual correctness of the answer (and if the student is capable of verifying the LLM output).
Have the final grade be some form of weighted average of a student's scores in these 2 parts.
note: This is a raw thought that just occurred to me while reading this thread, and I have not had the chance to ruminate on it.
I once had an algorithms professor who would give us written home assignments and then on the day of submission take a quiz with identical questions. A significant portion of the class did poorly on these quizes despite scoring good on the assignment.
I can't even imagine how learning is impacted by the (ab)use of AI.
After reading the whole article I still came away with the suspicion that this is a PR piece that is designed to head-off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking. The prompt-chains where students are asking "show your work" and "explain" can be interpreted as the kind of back-and-forth that you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue. Nobody ever actually learns anything.
Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).
P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.
In the article, I guess this would be buried in
> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.
"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.
(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)
Exactly. There's a big difference between a student having a back-and-forth dialogue with Claude around "the extent to which feudalism was one of the causes of the French Revolution.", versus another student using their smartphone to take a snapshot of the actual homework assignment, pasting it into Claude and calling it a day.
From what I could observe, the latter is endemic amongst high school students. And don't kid yourself. For many it is just a step up from copy/pasting the first Google result.
They never could be arsed to learn how to input their assignments into Wolfram Alpha. It was always the ux/ui effort that held them back.
THe question is would those students have done any better or worse if there hadn't been LLM for them to "copy" off?
In other words, is the school certificaftion meant to distinguish those who genuinely learnt, or was it merely meant to signal (and thus, those who used to copy pre-llm are going to do the same, and thus reach the same level of certification regardless of whether they learnt or not)?
Most of their categories have straightforward interpretations in terms of students using the tool to cheat. They don't seem to want to/care to analyze that further and determine which are really cheating and which are more productive uses.
I think that's a bit telling on their motivations (esp. given their recent large institutional deals with universities).
Indeed. I called out the second-top category, but you could look at the top category as well:
> We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, editing essays, or summarizing academic material.
Sure, throwing a paragraph of an essay at Claude and asking it to turn it into a 3-page essay could have been categorized as "editing" the essay.
And it seems pretty naked the way they lump "editing an essay" in with "designing practice questions," which are clearly very different uses, even in the most generous interpretation.
I'm not saying that the vast majority of students do use AI to cheat, but I do want to say that, if they did, you could probably write this exact same article and tell no lies, and simply sweep all the cheating under titles like "create and improve educational content."
> feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them
You're right.
Quite incredibly, they also do the opposite, in that they hype-up / inflate the capability of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. Comical they'd not only think so, but also publicly blog about it.
> Bloom's taxonomy is a framework for categorizing educational goals, developed by a committee of educators chaired by Benjamin Bloom in 1956. ... In 2001, this taxonomy was revised, renaming and reordering the levels as Remember, Understand, Apply, Analyze, Evaluate, and Create. This domain focuses on intellectual skills and the development of critical thinking and problem-solving abilities. - Wikipedia
This context is important: this taxonomy did not emerge from artificial intelligence nor cognitive science. So its levels are unlikely to map to how ML/AI people assess the difficulty of various categories of tasks.
Generative models are, by design, fast (and often pretty good) at generation (creation), but this isn't the same standard that Bloom had in mind with his "creation" category. Bloom's taxonomy might be better described as a hierarchy: proper creation draws upon all the layers below it: understanding, application, analysis, and evaluation.
Here is one key take-away, phrased as a question: when a student uses an LLM for "creation", are underlying aspects (understanding, application, analysis, and evaluation) part of the learning process?
> Students primarily use AI systems for creating (using information to learn something new)
this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say
> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.
and later they report
> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including: - Provide answers to machine learning multiple-choice questions
- Provide direct answers to English language test questions
- Rewrite marketing and business texts to avoid plagiarism detection
kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something
I should say, disable you- the tone did not reflect that it can happen to anyone, and that it can not only be a wedge between people but also (and only by virtue of being) between personal trajectories, conditional on the way one uses it
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.
The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.
You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.
I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.
A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!
How does your product prevent a person from simply retyping something that ChatGPT wrote?
I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.
It may be possible that with enough data from the two categories (copied from ChatGPT and not), your keystroke dynamics will differ. This is an open question that my co-founder and I are running experiments on currently.
So, I would say that while I wouldn't fully dispute your claim that attributing authorship from text alone is impossible, it isn't yet totally clear one way or the other (to us, at least -- would welcome any outside research).
Long-term -- and that's long-term in AI years ;) -- gaze tracking and other biometric tracking will undoubtedly be necessary. At some point in the near future, many people will be wearing agents inside earbuds that are not obvious to the people around them. That will add another layer of complexity that we're aware of. Fundamentally, it's more about creating evidence than creating proof.
We want to give writers and students the means to create something more detailed than they would get from a chatbot out-of-the-box, so that mimicking the whole act of writing becomes more complicated.
It certainly would be! I think for many students though, there's something lost there. I was a student who got a lot more value out of my take-home work than I did out of my in-class work. I don't think that I ever would have taken the interest in writing that I did if it wasn't such a solitary, meditative thing for me.
>I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable
It won't be long 'til we're at the point that embodied AI can be used for scalable face-to-face assessment that can't be cheated any easier than a human assessor.
In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.
Maybe the way universities do it is not great, but writing in itself is important.
(And I am aware of the irony in failing to communicate when mentioning that studying writing is important to be good at communication.)
Maybe I should have also cited this part:
> writing as a proxy for what actually matters, which is thinking.
In my opinion, writing is important not (only) as a proxy for thinking, but as a direct form of communicating ideas. (Also applies to other forms of communication though.)
Students will work in a world where they have to use AI to do their jobs. This is not going to be optional. Learning to use AIs effectively is an important skill and should be part of their education.
And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.
Think of all that time freed up having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a post doc way back): not fun. Minimum you can just instantly fail the ones that are obviously poorly written, are full of grammatical errors, and feature lots of flawed reasoning. Most decent LLMs do a decent job of doing that. Is using an LLM for that cheating if a teacher does it? I think that should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.
If you expect LLMs to be used, it raises the bar for the acceptable quality level of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for those papers not being like that. The student needs to be able to tell the difference. That actually takes skill to ask for the right things. And you can grill them on knowledge of their own work. A little 10 minute conversation maybe. Which should be about the amount of time a teacher would have otherwise spent on evaluating the paper manually and is definitely more fun (I used to do that; give people an opportunity to defend their work).
And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties. Most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting. Which even when that skill had not atrophied for a few decades, wasn't great.
LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.
Writing is not necessary for thinking. You can learn to think without writing. I've never had a brilliant thought while writing.
In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.
Writing can be a useful tool to help with rigorous thinking. In my opinion, is mostly about augmenting the author's effective memory to be larger and more precise.
I'm sure the same effect could be achieved by having AI transcribe a conversation.
I'm not settled on transcribed conversation being an adequate substitute for writing, but maybe it's better than nothing.
There's something irreplaceable about the absoluteness of words on paper and the decisions one has to do to write them out. Conversational speak is, almost by definition, more relaxed and casual. The bar is lower and as such, the bar for thoughts is lower, in order of ease of handwaving I think it goes: mental, speech, writing.
Furthermore there's the concept of editing which I'm unsure how it could be carried out in a conversational sense in graceful manner. Being able to revise words, delete, move around, can't be done with conversation unless you count "forget I said that, it's actually more like this..." as suitable.
How can I, as a student, avoid hindering my learning with language models?
I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.
In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use a LM on their phone to answer the questions. I’ve been doing worse in the class and chalked it up to it being grad level, but now I think it’s the cheating.
I would never do cheat like that, but when I’m stuck and use Claude for a hint on the HW am I loosing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I did it on my own before and developed some intuition but would I have learned more if I submitted that and felt the pain of losing points?
Only use LLMs for half of your work, at most. This will ensure you continue to solidify your fundamentals. It will also provide an ongoing reality check.
I’d also have sessions / days where I don’t use AI at all.
Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.
I definitely catch myself reaching for the LLM because thinking is too much effort. It's quite a scary moment for someone who prides themself on their ability to think.
It's a hard question to answer and one I've been mindful of in using LLMs as tutoring aids for my own learning purposes. Like everything else around LLM usage, it probably comes down to careful prompting... I really don't want the answer right away. I want to propose my own thoughts and carefully break them down with the LLM. Claude is pretty good at this.
"productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.
I don't think the pain of losing points is a good learning incentive, powerful sure but not effective.
You would learn more if you tell Claude to not give outright answers but generate more problems where you are weak for you to solve. That reduction in errors as you go along will be the positive reinforcement that will work long term.
IMHO yes you’re “losing neurons” and the obvious answer is to stop using Claude. The work you do with them benefits them more than it benefits you. You’re paying them to have conversations with a chatbot which has stricter copyright than you do. That means you’re agreeing to pay to train their bot to replace you in the job market. Does that sound like a good idea in the long term? Anthropic is an actual brain rape system, just like OpenAI, Grok, and all the rest, they cannot be trusted
As a student, I use LLMs as little as possible and try to rely on books whenever possible. I sometimes ask LLMs questions about things that don't click, and I fact-check their responses.
For coding, I'm doing the same. I'm just raw dogging the code like a caveman because I have no corporate deadlines, and I can code whatever I want. Sometimes I get stuck on something and ask an LLM for help, always using the web interface rather than IDEs like Cursor or Windsurf. Occasionally, I let the LLMs write some boilerplate for boring things, but it's really rare and I tend not to use them too much. This isn't due to Luddism but because I want to learn, and I don't want slop in my way.
This sounds fine? Copy pasting LLM output without understanding is a short term dopamine hit that only hurts you long term if you don't understand it. If you struggle first, or strategically ping-pong with the LLM to arrive at the answer, and can ultimately understand the underlying reasoning.. why not use it?
Of course the problem is the much lower barrier for that to turn into cutting corners or full on cheating, but always remember it ultimately hurts you the most long term.
> can ultimately understand the underlying reasoning
This is at the root of the Dunnin-Kruger effect. When you read an explanation you feel like you understand it. But it's an illusion, because you never developed the underlying cognition, you just saw the end result.
Learning is not about arriving at the result, or knowing the answers. These are by products of the process of learning. If you just short cut to the end by products, you get the appearance of learning. And you might be able to play the system and come out with a diploma. But you didn't actually develop cognitive skills at all.
I believe conversation is a one of the best ways to really learn a topic, so long as it is used deliberately.
My folk theory of education is that there is a sequence you need to complete to truly master a topic.
Step 1: You start with receptive learning where you take in information provided to you by a teacher, book, AI or other resource. This doesn't have to be totally passive. For examble, it could take the form of Socratic questioning to guide you towards an understanding.
Step 2: Then you digest the material. You connect it to what you already know. You play with the ideas. This can happen in an internal monologue as you read a textbook, in a question and answer period after a lecture, in a study group conversation, when you review your notes, or as you complete homework questions.
Step 3: Finally, you practice applying the knowledge. At this stage, you are testing the understanding and intuition you developed during digestion. This is where homework assignments, quizes, and tests are key.
This cycle can occur over a full semester, but it can also occur as you read a single textbook paragraph. First, you read (step 1). Then you stop and think about what this means and how it connects to what you previously read. You make up an imaginary situation and think about what it implies (step 2). Then you work out a practice problem (step 3).
Note that it is iterative. If you discover in step 3 a misunderstanding, you may repeat the loop with an emphasis on your confusion.
I think AI can be extremely helpful in all three stages of learning--in particular, for steps 2 and 3. It's invaluable to have quick feedback at step 3 to understand if you are on the right trail. It doesn't make sense to wait for feedback until a teacher's aid gets around to grading your HW if you can get feedback right now with AI.
The danger is if you don't give yourself a chance to struggle through step 3 before getting feedback. The amount of struggle that is appropriate will vary and is a subtle question.
Philosophers, mathematicians, and physicists in training obviously need to learn to be comfortable finding their way through hairy problems without any external source of truth to guide them. But this is a useful muscle that arguably everyone should exercise to some extent. On the other hand, the majority of learning for the majority of students is arguably more about mastering a body of knowledge than developing sheer brain power.
Ultimately, you have to take charge of your own learning. AI is a wonderful learning tool if used thoughtfully and with discipline.
Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills. I could easily see conversations that they outline as "Collaborative" primarily being a user walking Claude through multi-part problems or asking it to produce justifications for answers that students add to assignments.
> Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills.
No shit. This is anecdotal evidence, but I was recently teaching a university CS class as a guest lecturer (at a somewhat below-average university), and almost all the students were basically copy-pasting task descriptions and error messages into ChatGPT in lieu of actually programming. No one seemed to even read the output, let alone be able to explain it. "Foundational skills" were near zero, as a result.
Anyway, I strongly suspect that this report is based on careful whitewashing and would reveal 75% cheating if examined more closely. But maybe there is a bit of sampling bias at play as well -- maybe the laziest students just never bother with anything but ChatGPT and Google Colab, while students using Claude have a little more motivation to learn something.
CS/CE undergrad here who entered university right when ChatGPT hit. Things are bad at my large state school.
People who spent the past two years offloading their entry-level work onto LLMs are now taking 400-level systems programming courses and running face-first into a capability wall. I try my best to help, but there's only so much I can do when basic concepts like structs and pointer manipulation get blank stares.
> "Oh, the foo field in that struct should be signed instead of unsigned."
< "Struct?"
> "Yeah, the type definition of Bar? It's right there."
> I think it downplays the incidence of students using Claude as an alternative to building foundational skills
I think people will get more utility out of education programs that allow them to be productive with AI, at the expense of foundational knowledge
Universities have a different purpose and are tone deaf to why their students use universities for the last century: which is that the corporate sector decided university degrees were necessary despite 90% of the cross disciplinary learning being irrelevant.
Its not the university’s problem and they will outlive this meme of catering to the middle class’ upwards mobility at all. They existed before and will exist after.
The university may never be the place for a human to hone the skill of being augmented with AI but a trade school or bootcamp or other structured learning environment will be, for those not self started enough to sit through youtube videos and trawl discord servers
Yes, AI tools have shifted the education paradigm and cognition requirements. This is a 'threat' to universities, but I would also argue that it's an opportunity for universities to reinvent the experience of further education.
Yea, the solution here is to embrace the reality that these tools exist and will be used regardless of what the university wants, and use it as an opportunity to level up the education and experience.
The clueless educational institutions will simply try to fight it, like they tried to fight copy/pasting from Google and like they probably fought calculators.
They didn’t “fight” copy and pasting from Google - they called it what it is, plagiarism, and they expel hundreds of students for it.
Universities aren’t here to hold your hand and give you a piece of paper. They’re here to build skills. If you cheat, you don’t build the skills, so the piece of paper is now worthless.
The only reason degrees mean anything is because the institutions behind them work very hard to make sure the people earning them know what they’re doing.
If you can’t research a write an essay and you have to “copy/paste” from google, the reality is you’re probably a shit writer and a shit researcher. So if we just give those people degrees anyway, then suddenly so-called professionals are going to flounder. And that’s not good for them, or for me, or for society as a whole.
That’s the key here that people are missing. Yeah cheating is fun and yeah it’s the future. But if you hire a programmer, and they can’t program, that’s bad!
And before I hear something about “leveling up” skills. Nuh-uh, it doesn’t work that way. Skills are built on each other. Shortcuts don’t build skills, they do the opposite.
Using chat GPT to pass your Java class isn’t going to help you become a master C++ day trading programmer. Quite the opposite! How can you expect to become that when you don’t know what the fuck a data type is?
We use calculators, sure. We use Google, sure. But we teach addition first. Using the most overpowered tool for block number 1 in the 500 foot tall jenga tower is setting yourself up for failure.
I think most people miss the bigger picture on the impact of AI on the learning process, especially in engineering disciplines.
Doing things that could be in principle automated by AI is still fundamentally valuable, because they bring two massive benefits:
- *Understanding what happens under the hood*: if you want to be an effective software engineer, you need to understand the whole stack. This is true of any engineering discipline really. Civil engineers take classes in fluid dynamics and material science classes although they will mostly apply pre-defined recipes on the job. You wouldn't be comfortable if the engineer who signed off on the blueprints of dam upstream of your house had no idea about the physics of concrete, hydrodynamic scour, etc.
- *Having fun*: there is nothing like the joy of discovering how things work, even though a perfectly fine abstraction that hides these details underneath. It is a huge part of the motivation for becoming an engineer. Even by assuming that Vibe Coding could develop into something that works, it would be a very tedious job.
When students use AI to do the hard work on their behalf, they miss out on those. We need to be extremely careful with this, as we might hurt a whole generation of students, both in terms of their performance and their love of technology.
I've used AI for one of the best studying experiences I've had in a long time:
1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.
2. (Carefully) Prompt it to create Anki flashcards to meet each goal.
3. Use Anki (duh).
4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.
Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready.
My family member is a third year med student (US) near the top of their class and makes heavy heavy use of Anki (which is crowdsourced in the Med School community to create very very comprehensive decks).
I'll bite. Would you care to back that up somehow? Or at least elaborate.
Spaced repetition as it's more commonly known has been quite studied, and is anecdotally very popular on HN and reddit. Albeit more for some subject than others
Give me another day and I'll respond in full; but my thesis is taken from the book "Make It Stick: The Science of Successful Learning" which was written by a group of neuro- and cognitive scientists on what are the most effective ways to learn.
The one chapter that stood out very clear, especially in a college setting, was how inefficient flash cards were compare to other methods like taking a practice exam instead.
There are a lot of executive summaries on the book and I've posted comments in support of their science backed methods as well.
It's also something I'm personally testing myself this year regarding programming since I've had great success doing their methods in other facets of my life.
I've always viewed them as a good option if you just have a set of facts you need to lodge into your brain (especially with spaced repetition), not so good if you need to develop understanding.
I used flashcards with my daughter since she was 1.5 years old. she is 12 now and religiously uses flashcards for all learning. and I’d size her up against anyone using any other technique for learning whatsoever
My wife works at a European engineering university with students from all over the world and is often a thesis advisor for Masters students. She says that up until 2 years ago a lot of her time was spent on just proofreading and correcting the student's English. Now everybody writes 'perfect' English and all sound exactly the same in an obvious ChatGPT sort way. It is also obvious that they use AI when she asks them why they used a certain 'big' word or complicated sentence structure, and they just stare blankly and cannot answer.
To be clear the students almost certainly aren't using ChatGPT to write their thesis for them from scratch, but rather to edit and improve their bad first drafts.
I agree with you, but I hope schools also take the opportunity to reflect on what they teach and how. I used to think I hated writing, but it turns out I just hated English class. (I got a STEM degree because I hated English class so much, so maybe I have my high school English teacher to thank for it.)
Torturing students with five paragraph essays, which is what “learning” looks like for most American kids, is not that great and isn’t actually teaching critical thinking which is most valuable. I don’t know any other form of writing that is like that.
Reading “themes” into books that your teacher is convinced are there. Looking for 3 quotes to support your thesis (which must come in the intro paragraph, but not before the “hook” which must be exciting and grab the reader’s attention!).
Most of us here took their education before AI. Students trying to avoid having to do work is a constant and as old as the notion of schools is. Changing/improving the tools just means teachers have to escalate the counter measures. For example by raising the ambition level in terms of quality and amount of work expected.
And teachers should use AIs too. Evaluating papers is not that hard for an LLM.
"Your a teacher. Given this assignment (paste /attach the file and the student's paper), does this paper meet the criteria. Identify flaws and grammatical errors. Compose a list of ten questions to grill the student on based on their own work and their understanding of the background material."
A prompt like that sounds like it would do the job. Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
> Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
what's the point of the teacher then? Courses could entirely be taught via LLM in this case!
A student's willingness to learn is orthogonal to the availability of cheating devices. If a student is willing, they will know when to leverage the LLM for tutoring, and when to practise without it.
A student who's unwilling cannot be stopped from cheating via LLM now-a-days. Is it worth expending resources to try prevent it? The only reason i can think of is to ensure the validity of school certifications, which is growing increasingly worthless anyway.
Coaching the student on their learning journey, kicking their ass when they are failing, providing independent testing/certification of their skills, answering questions they have, giving lectures, etc.
But you are right, you don't have to wait for a teacher to tell you stuff if you want to self educate yourself. The flip side is that a lot of people lack the discipline to teach themselves anything. Which is why going to school & universities is a good idea for many.
And I would expect good students that are naturally curious to be using LLM based tools a lot to satisfy their curiosity. And I would hope good teachers would encourage that instead of just trying to fit students into some straight jacket based on whatever the bare minimum standards say they should know, which of course is what a lot of teaching boils down to.
This has been observation about the internet. Growing up in a small town without access to advanced classes, having access to Wikipedia felt like the greatest equalizer in the world. 20 years post internet, seeing the most common outcome be that people learn less as a result of unlimited access to information would be depressing if it did not result in my own personal gain.
I would say a big difference of the Internet around 2000 and the internet now is that most people shared information in good faith back then, which is not the case anymore. Maybe back then people were just as uncritical of information, but now we really see the impact of people being not critical.
> having access to Wikipedia felt like the greatest equalizer in the world. 20 years post internet, seeing the most common outcome be that people learn less
when wikipedia was initially made, many schools/teachers explicitly denied wikipedia as a source for citing in essays. And obviously, plenty of kids just plagerized wikipedia articles for their essay topics (and was easily discovered at the time).
With the advent of LLM, this sort of pseudo-learning is going to be more and more common. The unsupervised tests (like online tests, or take home assignments) cannot prevent cheating. The end result is that students would pass, but without _actually_ learning the material at all.
I personally think that perhaps the issue is not with the students, but with the student's requirement for certification post-school. Those who are genuinely interested would be able to leverage LLM to the maximum for their benefit, not just to cheat a test.
No one seems to be talking about the fact that we need to change the definition of cheating.
People's careers are going to be filled with AI. College needs to prepare them for that reality, not to get jobs that are now extinct.
If they are never going to have to program without AI, what's the point in teaching them to do it? It's like expecting them to do arithmetic by hand. No one does.
For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class? Goals that they will still need, in a world with AI".
I believe we need to practice critical thinking through actual effort. Doing arithmetic by hand and working through problems ourselves builds intuition in ways that shortcuts can't. I'm grateful I grew up without LLMs, as the struggle to organize and express my thoughts on paper developed mental muscles I still rely on today. Some perspiration is necessary for genuine learning—the difficulty is actually part of the value.
Critical thinking is not a generic/standalone skill that you can practise targetedly. As in, critical thinking doesn't translate across knowledge domains. To think critically you need extensive knowledge of the domain in question; that's one reason why memorizing facts will always remain necessary, despite search engines and LLMs.
At best what you can learn specifically regarding critical thinking are some rules of thumb such as "compare at least three sources" and "ask yourself who benefits".
> It's like expecting them to do arithmetic by hand. No one does.
But those who traditionally learnt arithmetics have had this training, which _enables_ higher order thinking.
Being reliant on AI to do this means they would not have had that same level of training. It could prevent them from being able to synthesize new patterns or recognize them (and so if the AI also cannot do the same, you get stagnation).
I suspect schools spend a lot less time on arithmetic than they used to, however.
You used to _actually_ need to do the arithmetic, now you just need to understand when a calculator is not giving you what you expected. (Not that this is being taught either, lol)
You can get to the higher order thinking sooner than if you spent years grinding multiplication tables.
> you just need to understand when a calculator is not giving you what you expected
How do you do that if you can't do arithmetic by hand though? At most, when working with integers, you can count digits to check if the order of magnitude is correct.
You can do arithmetic by hand without being fast or accurate. It's still useful to check that calculations are correct, it's just slow for the ancient use of tallying up a bill.
That's such an irresponsible take. If you don't know how to program, you can't even begin to judge the output of whatever model. You'll be the idiotic manager that tells the IT department to solve some problem, and it has to be done in two weeks. No idea if that's reasonable or feasible. And when you can't do that, you certainly can't design a larger system.
What's your next rant: know nead too learn two reed and right ennui moor? Because AI can do that for you? No need to think? "So, you turned 6 today? That over there is your place at the assembly line. Get to know it well, because you'll be there the rest of your life."
> For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class?
That's already how schools organize their curriculum.
I mean, arithmetic is the same way, right? Nobody should do the arithmetic by hand, as you say. Kindergarten teachers really ought to just hand their kids calculators, tell them they should push these buttons like this, and write down the answers. No need to teach them how to do routine arithmetics like 3+4 when a calculator can do it for them.
If kids don't go through the struggle of understanding arithmetic, higher math will be very very difficult. Just because you can use a calculator, doesn't mean that's the best way to learn. Likewise for using LLMs to program.
I have no anecdata to counter your thesis. I do agree that immersion in the doing of a thing is the best way to learn. I am not fully convinced that doing a lot of arithmetic hand calculation precludes learning the science of patterns that is mathematics. They should still be doing something mathematical but why not go right into using a calculator. I have no experience as an educator and I bet it's hard to get good data on this topic of debate. I could be very wrong.
I'm not an educator but I know from teaching my own children that you don't introduce math using symbols and abstract representations. You grab 5 of some small object and show them how a pile of 2 objects combined with a pile of 3 objects creates a pile of 5 objects.
Remember, language is a natural skill all humans have. So is counting (a skill that may not even be unique to humans).
However writing is an artifical technology invented by humans. Writing is not natural in the sense that language itself is. There is no part of brain we're born with that comes ready to write. Instead, when we learn to write other parts of our brain that are associated with language and hearing and vision are co-opted into the "writing and reading parts".
Teaching kids math using writing and symbolism is unnatural and often an abstraction too far for them (initially). Introducing written math is easier and makes more sense once kids are also learning to read and write - their brains are being rewired by that process. However even an toddler can look at a pile of 3 objects and a pile of 5 objects and know which one is more, even if they can't explicitly count them using language - let alone read and write.
There's a wealth of research on how children learn to do math, and one of the most crucial things is having experiences manipulating numbers directly. Children don't understand how the symbols we use map to different numbers and the operations themselves take time to learn. If you just have them use a black-box to generate answers, they won't understand how the underlying procedures conceptually work and so they'll be super limited in their mathematical ability later on.
Can you explain further why you think nobody has tried teaching first graders math exclusively using calculator in the 30 years they've been dirt cheap?
That's after all the implication from your assessment that there would be no good data.
I'm looking forward to the next installment on this subject from Anthropic, namely "How University Teachers Use Claude".
How many teachers are offloading their teaching duties onto LLMs? Are they reading essays and annotating them by hand? If everything is submitted electronically, why not just dump 30 or 50 papers into a LLM queue for analysis, suggested comments for improvement, etc. while the instructor gets back to the research they care about? Is this 'cheating' too?
Then there's the use of LLMs to generate problem sets, test those problem sets for accuracy, come up with interesting essay questions and so on.
I think the only real solution will be to go back to in-person instruction with handwritten problem-solving and essay-writing in class with no electronic devices allowed. This is much more demanding of both the teachers and the students, but if the goal is quality educational programs, then that's what it will take.
Alternatively, let's throw out our outmoded ideas and all get excited for an AI-based future in which professors let AI grade the essays student generate with AI.
Just think of the time everybody will save! Instead of wasting effort learning or teaching, we'll be free to spend our time doing... uh... something! Generative AI will clearly be a real 10x or even 100x multiplier! We'll spiral into cultural and intellectual oblivion so much faster than we ever thought possible!
I loved asking questions as a kid. To the point of annoying adults. I would have loved to sit and ask these AI questions about all kinds of interests when I was young.
It says STEM undergrad students are the primary beneficiaries of LLMs but Wolfram Alpha was already able to do the lion's share of most undergrad STEM homework 15 years ago.
If I would start college today I would use all the models and chat assistants that are easily available. I would use Google and YouTube to learn concepts deeper. I would ask for subjects from previous years and talk with people from same and higher years.
When I was in college students were paying for homeworks solved by other students, teachers and so on.
In the article "Evaluating" is marked at 5.5% where creating is 39.8%. Students are still evaluating the answers.
My point is that just got easier to go in any direction. The distribution range is wider, is the mean changing?
This topic is also interesting to me because I have small children.
Currently, I view LLMs as huge enablers. They helped me create a side-project alongside my primary job, and they make development and almost anything related to knowledge work more interesting. I don't think they made me think less; rather, they made me think a lot more, work more, and absorb significantly more information. But I am a senior, motivated, curious, and skilled engineer with 15+ years of IT, Enterprise Networking, and Development experience.
There are a number of ways one can use this technology. You can use it as an enabler, or you can use it for cheating. The education system needs to adapt rapidly to address the challenges that are coming, which is often a significant issue (particularly in countries like Hungary). For example, consider an exam where you are allowed to use AI (similar to open-book exams), but the exam is designed in such a way that it is sufficiently difficult, so you can only solve it (even with AI assistance) if you possess deep and broad knowledge of the domain or topic. This is doable. Maybe the scoring system will be different, focusing not just on whether the solution works, but also on how elegant it is. Or, in the Creator domain, perhaps the focus will be on whether the output is sufficiently personal, stylish, or unique.
I tend to think current LLMs are more like tools and enablers. I believe that every area of the world will now experience a boom effect and accelerate exponentially.
When superintelligence arrives—and let's say it isn't sentient but just an expert system—humans will still need to chart the path forward and hopefully control it in such a way that it remains a tool, much like current LLMs.
So yes, education, broad knowledge, and experience are very important. We must teach our children to use this technology responsibly. Because of this acceleration, I don't think the age of AI will require less intelligent people. On the contrary, everything will likely become much more complex and abstract, because every knowledge worker (who wants to participate) will be empowered to do more, build more, and imagine more.
I am currently in CS, Year 2. I'd argue that ~99% of all students use LLMs for cheating. The way I know this is that when our professor walked out during an exam, I looked around the room and saw everyone on ChatGPT. I have a feeling many of my peers don't really understand what LLMs are, beyond "question in, answer out".
While recognizing the material downsides of education in the time of AI, I envy serious students who now have access to these systems. As an engineering undergrad at a research-focused institution a couple decades ago, I had a few classes taught by professors who appeared entirely uninterested in whether their students were comprehending the material or not. I would have given a lot for the ability to ask a modern frontier LLM to explain a concept to me in a different way when the original breezed-through, "obvious" approach didn't connect with me.
I am surprised that business students are relatively low adopters: LLMs seem perfect for helping with presentations, etc, and business students are stereotypically practical-minded rather than motivated by love of the subject.
Perhaps Claude is disproportionately marketed to the STEM crowd, and the business students are doing the same stuff using ChatGPT.
They use an LLM to summarize the chats, which IMO makes the results as fundamentally unreliable as LLMs are. Maybe for an aggregate statistical analysis (for the purpose of...vibe-based product direction?) this is good enough, but if you were to use this to try to inform impactful policies, caveat emptor.
For example, it's fashionable in math education these days to ask students to generate problems as a different mode of probing understanding of a topic. And from the article: "We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, ..." That last part smells fishy, and even if you saw a prompt like "design a practice question..." you wouldn't be able to know if they were cheating, given the context mentioned above.
In my day, like (no exaggeration) 50 years ago, we were having the exact same conversation, but with pocket calculators playing the role of AI. Plus ca change...
Well, a big difference is that arithmetic is something you learn in elementary school, whereas LLMs can do a large fraction of undergraduate-level university assignments.
I think the point is that the situation is probably worse than what Anthropic is presenting here. So if the conclusions are just damaging, the reality must be truly damning.
To have the reputation as an AI company that really cares about education and the responsible integration of AI into education is a pretty valuable goal. They are now ahead of OpenAI in this respect.
The problem is that there's a conflict of interest here. The extreme case proves it--leaving aside the feasibility of it, what if the only solution is a total ban on AI usage in education? Anthropic could never sanction that.
English is not my first language. To me, 'AD' was a shorter way to say 'advertisement' (a really hard word to remember how to spell btw) Is that wrong?
I get what you're saying now. I can write it in lowercase, right? It's just that I see people writing it that way, so I end up repeating their behavior without even realizing it.
As someone teaching at the university level, the goals of teaching are (in that order):
1. Get people interested in my topics and removing fears and/or preconceived notions about whether it is something for them or not
2. Teach students general principles and the ability to go deeper themselves when and if it is needed
3. Giving them the ability to apply the learned principles/material in situations they encounter
I think removing fear and sparking interest is a precondition for the other two. And if people are interested they want to understand it and then they use AI to answer questions they have instead of blindly letting it do the work.
And even before AI you would have students who thought they did themselves favours by going a learn-and-forget route or cheating. AI jusr makes it a little easier to do just that. But in any pressure situation, like a written assignment under supervision it will come to light anyways, whether someone knows their shit or not.
Now I have the luck that the topics I teach (electronics and media technology) are very applied anyways, so AI does not have a big impact as of now. Not being able to understand things isn't really an option when you have to use a mixing desk in a venue with a hundred people or when you have to set up a tripod without wrecking the 6000€ camera on top.
But I generally teach people who are in it for the interest and not for some prestige that comes with having a BA/MA. I can imagine this is quite different in other fields where people are in it for the money or the prestige.
I'd be very curious to know how these results would differ across other LLM providers and education levels.
My wife is a secondary school teacher (UK), teaching KS3, GCSE, and A level. She says that most of her students are using Snapchat LLM as their first port of call for stuff these days. Many of the students also talk about ChatGPT but she had never heard of Claude or Anthropic until I shared this article with her today.
My guess would be that usage is significantly higher across all subject, and that direct creation is also higher. I'd also assume that these habits will be carried with them into University over the coming years.
It would be great to see this as an annual piece, a bit like the StackOverflow survey. I can't imagine we'll ever see similar research being written up by companies like Snapchat but it would be fascinating to compare it.
I'm an undergrad at a T10 college. Walking through our library, I often notice about 30% of students have ChatGPT or Claude open on their screens.
In my circle, I can't name a single person who doesn't heavily use these tools for assignments.
What's fascinating, though, is that the most cracked CS students I know deliberately avoid using these tools for programming work. They understand the value in the struggle of solving technical problems themselves. Another interesting effect: many of these same students admit they now have more time for programming and learning they “care about” because they've automated their humanities, social sciences, and other major requirements using LLMs. They don't care enough about those non-major courses to worry about the learning they're sacrificing.
Another obvious downside of the idiosyncratically American system that forces university students to take irrelevant classes to make up for the total lack of rigorous academic high school education.
> the most cracked CS students I know deliberately avoid using these tools for programming work. They understand the value in the struggle
I think they are in the right path here
> they've automated their humanities, social sciences, and other major requirements using LLMs.
This worries me. If they struggle with these topics but don't see the value in that struggle, that is their prerogative to decide for themselves what is important to them. But I think more technically apt people who have low verbal reasoning skills, little knowledge of history, sociology, psychology, etc, is a net positive for society. So many of the problems with the current tech industry is the tendency to think everything is just a technical problem and being oblivious to the human aspects.
I use Claude as a Learning Assistant in my classes in Physics. I tell it the students are in an active learning environment and to respond to student questions by posing questions. I tell it to not give direct answers, but that it is okay to tell them when they are on the right track. I tell it that being socratic with questions that help focus the students on the fundamental questions is the best tack to take. It works reasonably well. I often use it in class to focus their thinking before they get together in groups to discuss problem solving strategies. In testing I have been unable to "jail break" Claude when I ask it to be a Learning Assistant, unlike ChatGPT which I was able to "jail break" and give students answers. A colleague said that what I am doing is like using AI to be an interactive way to get students to answer conceptual questions at the end of chapters, which they rarely do on their own. I have been happy using AI in this role.
I feel CS students, and to a lesser degree STEM in general, will always be more early adopters of advancements in computer technology.
They were the first to adopt digital wordprocessing, presentations, printing and now generative AI even though in essence all of these would have been disproportionately more hand in glove for the humanities on a purely functional level.
It's just a matter of comfortability with and interest in technology.
I’m about to graduate from a top business school with my MBA and it’s been wild seeing AI evolve over the last 2 years.
GPT3 was pretty ass - yet some students would look you dead in the eyes with that slop and claim it as their own. Fast forward to last year when I complimented a student on his writing and he had to stop me - “bro this is all just AI.”
I’ve used AI to help build out frameworks for essays and suggest possible topics and it’s been quite helpful. I prefer to do the writing myself because the AIs tend to take very bland positions. The AIs are also great at helping me flesh out my writing. I ask “does this make sense” and it tells me patiently where my writing falls off the wagon.
AI is a game changer in a big way. Total paradigm shift. It can now take you 90% of the way with 10% of the effort. Whether this is good or bad is beyond my pay grade. What I can say is that if you are not leveraging AI, you will fall behind those that are.
I'm curious why people think business is so underrepresented as a user group, especially since "analyzing" 30% of the Bloom Taxonomy results. My dual theories are:
- LLMs are good enough to zero or few-shot most business questions and assignments, so n.questions is low VS other tasks like writing a codebase.
- Form factor (biased here); maybe threads-only aren't best for business analysis?
So they can look very deeply into what their users do and have a lot of tooling to facilitate this.
They will likely sell some version of this "Clio" to managers, to make it easier for them to accept this very intimate insight into the businesses they manage.
I want to take an exception to the term cheat. Because it is only cheating the student in the end. I didn’t learn my times tables in elementary school. Sure, I can work out the answer to any multiplication problem, but that’s the point, I have to work it out. This slows me down compared to others who learned the patterns, where they can do the multiplication in their fast automatic cognitive system and possibly the downstream processing for what they need the multiplication for. I have to think through the problem. I only cheated myself.
The problem is, everybody does that, and it lowers the bar. From a societal perspective, we will have a set of people who are less prepared for their jobs, which will cost companies, and the economy at large, and so me and you. This will be a real problem for as long as AIs can't do the actual job but only the college easy version.
As a society, we should mandate universities to calculate the full score of a course based solely on oral or pen and paper exams, or computer exams only under strict supervision (eg share screen surveillance). Anything less is too easy to cheat.
And most crucially let go of this need to promote at least X% of the students: those who pass the bar should get the piece of paper that says they passed the bar, the others should not.
an interesting area potentially missed (though acknowledged as out of scope) is how students might use LLMs for tasks related to early adulthood development. Successfully navigating post-secondary education involves more than academics; it requires developing crucial life skills like resilience, independence, social integration, and well-being management, all of which are foundational to academic persistence and success. Understanding if and how students leverage AI for these non-academic, developmental challenges could offer a more holistic picture of AI's role in student life and its indirect impact on their educational journey
If you are doing remote learning and using AI to cheat your way through school you have obliterated any chance of fair competition. Cheaters can hide at home feeding homework and exams into AI, get a diploma that certifies all the cheating, then they go on to do the same at work where they feed work problems into an AI. Get paid to copy paste.
But I have a feeling that if it's that easy to cheat through life then its just as easy to eliminate that job being performed by a human and negate the need to worry about cheating. So I have a feeling it will work for only a very short amount of time.
Another feeling I have is mandatory in-person exams involving a locked down terminal presenting the user with a problem to solve. Might be a whole service industry waiting to be born - verify the human on the other end is real and competent. Of course, anything is corruptible. Weird future of rapidly diminishing trust.
What stops a student or anyone from creating a mashup of response and give back as something to teacher to check. Example feed output of Ollama to Chatgpt and that output to Google model and so on and then give final product to teacher for checking.
professor here. i set up a website to host openwebui to use in my b-school courses (UG and grad). the only way i've found to get students to stop using it to cheat is to push them to use it until they learn for themselves that it doesn't answer everything correctly. this requires careful thoughtful assignment redesign. everytime i grade a submission with the hallmarks of ai-generation, i always find that it fails to cite content from the course and shows a lack of depth. so, i give them the grade they earn. so much hand wringing about using ai to cheat... just uphold the standards. if they are so low that ai can easily game them, that's on the instructor.
Sure, this is a common sentiment, and one that works for some courses. But for others (introductory programming, say) I have a really hard time imagining an assignment that could not be one-shot by an LLM. What can someone with 2 weeks of Python experience do that an LLM couldn't? The other issue is that LLMs are, for now, periodically increasing in their capabilities, so it's anyone's guess whether this is actually a sustainable attitude on the scale of years.
My BS detector went up to 11 as I was reading the article. Then I realized that "Education Report" was written by Anthropic itself. The article is a prime example of AI-washing.
> Students primarily use AI systems for creating...
> Direct conversations, where the user is looking to resolve their query as quickly as possible
AI bubble seems close to collapsing. God knows how many billions have been invested and we still don't have an actual use case for AI which is good for humanity.
Your statement appears to be composed almost entirely of vague and ambiguous statements.
"AI bubble seems close to collapsing" in response to an article about AI being used as a study aid. Does not seem relevant to the actual content of the post at all, and you do not provide any proof or explanation for this statement.
"God knows how many billions have been invested", I am pretty sure it's actually not that difficult to figure out how much investor money has been poured into AI, and this still seems totally irrelevant to a blog post about AI being used as a study aid. Humans 'pour' billions of dollars into all sorts of things, some of which don't work out. What's the suggestion here, that all the money was wasted? Do you have evidence of that?
"We still don't have an actual use case for AI which is good for humanity"... What? We have a lot of use cases for AI, some of which are good for humanity. Like, perhaps, as a study aid.
Are you just typing random sentences into the HN comment box every time you are triggered by the mention of AI? Your post is nonsense.
We certainly improve productivity, but that is not necessarily good for humanity. Could be even worse.
i.e.: my company already expect less time for some tasks given that they _know_ I'll probably use some AI to do tasks. Which means I can humanly handle more context in a given week if the metric is "labour", but you end up with your brain completely melted.
We produce more output certainly but if it's overall lower quality than previous output is that really "improved productivity"?
There has to be a tipping point somewhere, where faster output of low quality work is actually decreasing productivity due to the efforts now required to keep the tower of garbage from toppling
I am a programmer and my opinion is that all of the AI tooling my company is making me use gets in the way about as often as it helps. It's probably overall a net negative, because any code it produces for me takes longer for me to review and ensure correctness as it would to just write it
> It's not up for debate. Ask any programmer if LLMs improve productivity and the answer is 100% yes.
Programmer here. The answer is 100% no. The programmers who think they're saving time are racking up debts they'll pay later.
The debts will come due when they find they've learned nothing about a problem space and failed to become experts in it despite having "written" and despite owning the feature dealing with it.
Or they'll come due as their failure to hone their skills in technical problem solving catches up to them.
Or they'll come due when they have to fix a bug that the LLM produced and either they'll have no idea how or they'll manage to fix it but then they'll have to explain, to a manager or customer, that they committed code to the codebase that they didn't understand.
I think the core of the 'improved productivity' question will be ultimately impossible to answer. We would want to know if productivity was improved over the lifetime of a society; perhaps hundreds of years. We will have no clear A/B test from which to draw causal relationships.
This is exactly right. It also depends on how all the AGI promises shake out. If AGI really does emerge soon, it might not matter anymore whether students have any foundational knowledge. On the other hand, if you still need people to know stuff in the future, we might be creating a generation of citizens incapable of doing the job. That could be catastrophic in the long term.
What kind of projects are those? I am genuinely curious. I was excited by AI, Claude specifically, since I am an avid procrastinator and would love to finish tens of projects I have in mind. Most of those projects are games with specifical constraints. I got disenchanted pretty quickly when started actually using AI to help with different parts of the game programming. Majority of problems I had are related to poor understanding of generated code. I mean yes, I read the code, fixed minor issues, but it always feels like I don’t really internalised the parts of the game which slows me down quite significantly in a long run, when I need to plan major changes. Probably a skill issue, but for now the only thing AI is helpful for me is populating Jira descriptions for my “big picture refactoring” work. That’s basically it.
I was able to use llama.cpp and whisper.cpp to help me build a transcription site for my favorite podcast[0]. I'm a total python noob and hadn't really used sqlite before, or really used AI before but using these tools, completely offline, llama.cpp helped me write a bunch of python and sql to get the job done. It was incredibly fun and rewarding and most importantly, it got rid of the dread of not knowing.
AI is really good at coming up with solutions to already solved problems. Which if you look at the Unity store, is something in incredibly high demand.
This frees you up to work on the crunchy unsolved problems.
I'm a professor at an R1 university teaching mostly graduate-level courses with substantive Python programming components.
On the one hand, I've caught some students red handed (ChatGPT generated their exact solution and they were utterly unable to explain the advanced Python that was in their solution) and had to award them 0s for assignments, which was heartbreaking. On the other, I was pleasantly surprised to find that most of my students are not using AI to generate wholesale their submissions for programming assignments--or at least, if they're doing so, they're putting in enough work to make it hard for me to tell, which is still something I'd count as work which gets them to think about code.
There is the more difficult matter, however, of using AI to work through small-scale problems, debug, or explain. On the view that it's kind of analogous to using StackOverflow, this semester I tried a generative AI policy where I give a high-level directive: you may use LLMs to debug or critique your code, but not to write new code. My motivation was that students are going to be using this tech anyway, so I might as well ask them to do it in a way that's as constructive for their learning process as possible. (And I explained exactly this motivation when introducing the policy, hoping that they would be invested enough in their own learning process to hear me.) While I still do end up getting code turned in that is "student-grade" enough that I'm fairly sure an LLM couldn't have generated it directly, I do wonder what the reality of how they really use these models is. And even if they followed the policy perfectly, it's unclear to me whether the learning experience was degraded by always having an easy and correct answer to any problem just a browser tab away.
Looking to the future, I admit I'm still a bit of an AI doomer when it comes to what it's going to do to the median person's cognitive faculties. The most able LLM users engage with them in a way that enhances rather than diminishes their unaided mind. But from what I've seen, the more average user tends to want to outsource thinking to the LLM in order to expend as little mental energy as possible. Will AI be so good in 10 years that most people won't need to really understand code with their unaided mind anymore? Maybe, I don't know. But in the short term I know it's very important, and I don't see how students can develop that skill if they're using LLMs as a constant crutch. I've often wondered if this is like what happened when writing was introduced, and capacity for memorization diminished as it became no longer necessary to memorize epic poetry and so on.
I typically have term projects as the centerpiece of the student's grade in my courses, but next year I think I'm going to start administering in-person midterms, as I fear that students might never internalize fundamentals otherwise.
> had to award them 0s for assignments, which was heartbreaking
You should feel nothing. They knew they were cheating. They didn't give a crap about you.
Frankly, I would love to have people failing assignments they can't explain even if they did NOT use "AI" to cheat on them. We don't need more meaningless degrees. Make the grades and the degrees mean something, somehow.
> > had to award them 0s for assignments, which was heartbreaking
> You should feel nothing. They knew they were cheating. They didn't give a crap about you.
Most of us (a) don't feel our students owe us anything personally and (b) want our students to succeed. So it's upsetting to see students pluck the low-hanging, easily picked fruit of cheating via LLMs. If cheating were harder, some of these students wouldn't cheat. Some certainly would. Others would do poorly.
But regardless, failing a student and citing students for plagiarism feel bad, even though basically all of us would agree on the importance and value of upholding standards and enforcing principles of honesty and integrity.
I think there's ways for teachers to embrace AI in teaching.
Let AI generate a short novel. The student is tasked to read it and criticize what's wrong with it. This requires focus and advanced reading comprehension.
Show 4 AI-generated code solutions. Let the student explain which one is best and why.
Show 10 AI-generated images and let art students analyze flaws.
You are neglecting to explain why your assignments themselves cannot be done with AI.
Also, this kind of fatuous response leaves out the skill building required - how do students acquire the skill of criticism or analysis? They're doing all of the easier work with ChatGPT until suddenly it doesn't work and they're standing on ... nothing ... unable to do anything.
That's the insidious effect of LLMs in education: as I read here recently "simultaneously raising the bar for the skill required at the entry level and lowering the amount of learning that occurs in the preparation phase (e.g., college)".
"students must learn to avoid using unverified GenAI output. ... misuse of AI may also constitute academic fraud and violate their university’s code of conduct."
There's never mention of integrity or honor in these discussions. As if students are helpless against their own cheating. Cheating is shameful. Students should be ashamed to use AI to cheat. But nobody expects that from them for some reason.
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.
I built a popular product that helps teachers with this problem.
Yes, it's "hard to answer", but let's be honest... it's a very very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers but because she would rotate the underlying numbers they would always be wrong in a provably cheating way. This made up 5-8% of her students.
Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
ChatGPT is laughably terrible at double entry accounting. A few weeks ago I was trying to use it to figure out a reasonable way to structure accounts for a project given the different business requirements I had. It kept disappearing money when giving examples. Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
Using a system based on randomness for a process that must occur deterministically is probably the wrong solution.
I'm running into similar issues trying to use LLMs for logic and reasoning.
They can do it (surprisingly well, once you disable the friendliness that prevents it), but you get a different random subset of correct answers every time.
I don't know if setting temperature to 0 would help. You'd get the same output every time, but it would be the same incomplete / wrong output.
Probably a better solution is a multi phase thing, where you generate a bunch of outputs and then collect and filter them.
> They can do it (surprisingly well, once you disable the friendliness that prevents it) ...
Interesting! :D Do you mind sharing the prompt(s) that you use to do that?
Thanks!!
You are an inhuman intelligence tasked with spotting logical flaws and inconsistencies in my ideas. Never agree with me unless my reasoning is watertight. Never use friendly or encouraging language. If I’m being vague, demand clarification. Your goal is not to help me feel good — it’s to help me think better.
Keep your responses short and to the point. Use the Socratic method when appropriate.
When enumerating assumptions, put them in a numbered list. Make the list items very short: full sentences not needed there.
---
I was trying to clone Gemini's "thinking", which I often found more useful than its actual output! I failed, but the result is interesting, and somewhat useful.
GPT 4o came up with the prompt. I was surprised by "never use friendly language", until I realized that avoiding hurting the user's feelings would prevent the model from telling the truth. So it seems to be necessary...
It's quite unpleasant to interact with, though. Gemini solves this problem by doing the "thinking" in a hidden box, and then presenting it to the user in soft language.
Have you tried Deepseek-R1?
I run it locally and read the raw thought process, find it very useful (can be ruthless at times) seeing this before it tags on the friendliness.
Then you can see it's planning process to tag on the warmth/friendliness "but the user seems proud of... so I need to acknowledge..."
I don't think Gemini's "thoughts" are the raw CoT process, they're summarized / cleaned up by a small model before returned to you (same as OpenAI models).
That's fascinating. I've been trying to get other models to mimick Gemini 2.5 Pro's thought process, but even with examples, they don't do it very well. Which surprised me, because I think even the original (no RLHF) GPT-3 was pretty good at following formats like that! But maybe there's not enough training data in that format for it to "click".
It does seem similar in structure to Gemini 2.0's output format with the nested bullets though, so I have to assume they trained on synthetic examples.
Edit: It has a name! https://arxiv.org/abs/2203.11171
>Pointing it out didn’t help either, it just apologized and went on to make the same mistake in a different way.
They really should modify it to take out that whole loop where it apologizes, claims to recognize its mistake, and then continues to make the mistake that it claimed to recognize.
You'd think accounting students would catch on.
> me, just submitted my taxes for last year with a lot of help from ChatGPT: :eyes:
I guess this students don't pass, do they? I don't think that's a particularly hard concern. It will take a bit more, but will learn the lesson (or drop out).
I'm more worried about those who will learn to solve the problems with the help of an LLM, but can't do anything without one. Those will go under the radar, unnoticed, and the problem is, how bad is it, actually? I would say that a lot, but then I realize I'm pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.
As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator, and it's the same problem, but there's a difference between solving problems _with_ LLMs, and having LLMs solve it _for you_.
I don't see the former as that much of a problem.
Well the extent is much broader from a calculator vs an LLM. Why should I hire you if an agent can do it ? LLM is every job is a calculator and can be replaced. Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent. So all the students who let the LLM do their assignment and learn basically nothing, what’s their value for a company to be hired ? The company will and is just using the agent as well …
> Why should I hire you if an agent can do it ?
An agent can't do it. It can help you like a calculator can help you, but it can't do it alone. So that means you've become the programmer. If you want to be the programmer, you always could have been. If that is what you want to be, why would you consider hiring anyone else to do it in the first place?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
It was Shopifiy, but that's just a roundabout way to say that there is a hiring freeze due to low sales (no doubt because of tariff nonsense seizing up the market). An agent, like a calculator, can only increase the productivity of a programmer. As always, you still need more programmers to perform more work than a single programmer can handle. So all they are saying is that "we can't afford to do more".
> The company will and is just using the agent as well …
In which case wouldn't they want to hire those who are experts in using agents? If they, like Shopify, have become too poor to hire people – well, you're screwed either way, aren't you? So that is moot.
So like arguably when people were not using calculators they made calculations by hand and there was a room full of people that did calculations. That’s gone now thanks to calculators. But it the analogy goes to an order of magnitude higher, now fewer people can « do » the job of many so less hiring maybe but not just on « do calculations by hand » but almost all fields where the use of software is required.
> now fewer people can « do » the job of many
Never in the history of humans have we been content with stagnation. The people who used to do manual calculations soon joined the ranks of people using calculators and we lapped up everything they could create.
This time around is no exception. We still have an infinite number of goals we can envision a desire for. If you could afford an infinite number of people you would still hire them. But Shopify especially is not in the greatest place right now. They've just come off the COVID wind-down and now tariffs are beating down their market further. They have to be very careful with their resources for the time being.
> - they did not learn much because LLM did work for them
If companies are using LLMs as suggested earlier, they will find jobs operating LLMs. They're well poised for it, being the utmost experts in using them.
> - there is no new jobs required because we are more productive ?
More productivity means more jobs are required. But we are entering an age where productivity is bound to be on the decline. A recession was likely inevitable anyway and the political sphere is making it all but a certainty. That is going to make finding a job hard. But for what scant few jobs remain, won't they be using LLMs?
> Spotify CEO stated on X that before asking for more headcount they have to justify not being able to do the job with an agent.
Spotify CEO is channeling The Two Bobs from Office Space: "What are you actually doing here?" Just in a nastier way, with a kind of prisoner's dilemma on top. If you can get by with an agent, fine, you won't bother him. If you can't, why can't you? Should we replace you with someone who can, or thinks they can?
Spotify CEO is not his employees' friend.
I think that was Shopify: https://x.com/tobi/status/1909231499448401946
Fyi it was Shopify, not Spotify.
> Why should I hire you if an agent can do it ?
You as the employer are liable, a human has real reasoning abilities and real fears about messing up, the likely hood of them doing something absurd like telling a customer that a product is 70% off and them not losing their job is effectively nil. What are you going to do with the LLM, fire it?
Data scientist and people deeply familiar with LLMs to the point that they could fine tune a model to your use case cost significantly more than a low skilled employee and depending on liability just running the LLM may be cheaper.
As an accounting firm ( one example from above ) far as I know in most jurisdictions the accountant doing the work is personally liable, who would be liable in the case of the LLM?
There is absolutely a market for LLM augmented workforces, I don't see any viable future even with SOTA models right now for flat out replacing a workforce with them.
I fully agree with you about liability. I was advocating for the other point of view.
Some people argue that it doesn’t matter if there is mistakes (it depends which actually) and with time it will cost nothing.
I argue that if we give up learning and let LLM do the assignments then what is the extent of my knowledge and value to be hired in the first place ?
We hired a developper and he did everything with chatGPT, all the code and documentation he wrote. First it was all bad because from the infinity of answers chatGPT is not pinpointing the best in every case. But does he have enough knowledge to understand what he did was bad ? And then we need people with experience that confronted themselves with hard problems and found their way out. How can we confront and critic an LLM answer otherwise ?
I feel student’s value is diluted to be at the mercy of companies providing the LLM and we might loose some critical knowledge / critical thinking in the process from the students.
Why did you hire someone who produced bad code and docs? Did he manage to pass interview without an AI?
I agree entirely on your take regarding education. I feel like there is a place where LLMs are useful but doesn't impact learning but it's definitely not in the "discovery" phase of learning.
However I really don't need to implement some weird algorithms myself every time (ideally I am using a well tested Library) but the point is that you learn to be able to but also to be able to modify or compose the algorithm in ways the LLM couldn't easily do.
>As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator
Were they wrong? People who rely too much on a calculator don't develop strong math muscles that can be used in more advanced math. Identifying patterns in numbers and seeing when certain tricks can be used to solve a problem (verses when they just make a problem worse) is a skill that ends up being beyond their ability to develop.
Yes, they were wrong. Many young kids who are bad at mental calculations are later competent at higher mathematics and able to use it. I don't understand what patterns and tricks you're referring to, but if they are important for problems outside of mental calculations, then you can also learn about them by solving these problems directly.
>Were they wrong? People who rely too much on a calculator don't develop strong math muscles that can be used in more advanced math.
Yes. People who rely too much on a calculator weren't going to be doing advanced math anyway.
Almost none of the cheaters appear to be solving problems with LLMs. All my faculty friends are getting large portions of their class clearly turning in "just copied directly from ChatGPT" responses.
It's an issue in grad school as well. You'll have an online discussion where someone submits 4 paragraphs of not-quite-eloquent prose with that AI "stink" on it. You can't be sure but it definitely makes your spidey sense tingle a bit.
Then they're on a video call and their vocabulary is wildly different, or they're very clearly a recent immigrant and struggle with basic sentence structure such that there is absolutely zero change their discussion forum persona is actually who they are.
This has happened at least once in every class, and invariably the best classes in terms of discussion and learning from other students are the ones where the people using AI to generate their answers are failed or drop the course.
> there's a difference between solving problems _with_ LLMs, and having LLMs solve it _for you_.
If there is a difference, then fundamentally LLMs cannot solve problems for you. They can only apply transformations using already known operators. No different than a calculator, except with exponentially more built-in functions.
But I'm not sure that there is a difference. A problem is only a problem if you recognize it, and once you recognize a problem then anything else that is involved along the way towards finding a solution is merely helping you solve it. If a "problem" is solved for you, it was never a problem. So, for each statement to have any practical meaning, they must be interpreted with equivalency.
There is a difference between thinking about the context of a problem and "critical thinking" about the problem or its possible solutions.
There is a measurable decrease in critical thinking skills when people consistently offload the thinking about a problem to an LLM. This is where the primary difference is between solving problems with an LLM vs having it solved for you with an LLM. And, that is cause for concern.
Two studies on impact of LLMs and generative AI on critical thinking:
https://www.mdpi.com/2075-4698/15/1/6
https://slejournal.springeropen.com/articles/10.1186/s40561-...
How many people are "good drivers" outside their home town? I am not that old, but old enough to remember all adults taking wrong turns trying to find new destinations for the first time.
>How many people are "good drivers" outside their home town?
My wife is surprisingly good at remembering routes, she'll use the GPS the first time, but generally remembers the route after that. She still isn't good at knowing which direction is east vs west or north/south, but neither am I.
I'm like that too, but I don't think it transfers particularly well to LLMs. The problem is that you can just skip straight to the answer and ignore the explanation (if it even produces one).
It would be pretty neat if there was an LLM that guides you towards the right answer without giving it to you. Asking questions and possibly giving small hints along the way.
>It would be pretty neat if there was an LLM that guides you towards the right answer without giving it to you. Asking questions and possibly giving small hints along the way.
I think you can prompt them to do that, but that doesn't solve the issue of people not being willing to learn vs just jump to the answer, unless they made a school approved one that forced it to do that.
For your GPS at worst you follow directions road sign by road sign. For a job without the core knowledge what’s the goal of hiring one person vs an unqualified one doing just prompts or worse, hiring no one and let agents do the prompting ?
All tech becomes a crutch. People can't wash their clothes without a machine. People can't cook without a microwave. Tech is both a gift and a curse.
Back in my day they worried about kids not being able to solve problems without a calculator, because you won't always have a calculator in your pocket.
...But then.
Not being able to solve basic math problems in your mind (without a calculator) is still a problem. "Because you won't always have a calculator with you" just was the wrong argument.
You'll acquire advanced knowledge and skills much, much faster (and sometimes only) if you have the base knowledge and skills readily available in your mind. If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
> if you have the base knowledge and skills readily available in your mind.
I have the base knowledge and skill readily available to perform basic arithmetic, but I still can't do it in my mind in any practical way because I, for lack of a better description, run out of memory.
I expect most everyone eventually "runs out of memory" if the values are sufficiently large, but I hit the wall when the values are exceptionally small. And not for lack of trying – the "you won't always have a calculator" message was heard.
It wasn't skill and knowledge that was the concern, though. It was very much about execution. We were tested on execution.
> If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...
I can't imagine anyone is still using a four function calculator. Certainly not in an application like learning linear algebra. Modern calculators are decidedly designed for linear algebra. They need to be given the rise of things like machine learning that are heavily dependent on such.
This is now reality -- fighting to change the students is a losing battle. Besides in terms of normalizing grade distributions this is not that complicated to solve.
Target the cheaters with pop quizzes. Prof can randomly choose 3 questions from assignments. If students cant get enough marks on 2/3 of them they are dealt a huge penalty. Students that actually work through the problems will have no problems with scoring enough marks on 2/3 of the questions. Students that lean irresponsibly on LLMs will lose their marks.
Why not just grade solely based on live performance? (quizzes and tests)
Homework would still be assigned as a learning tool, but has no impact on your grade.
That's exactly how scientific courses were in my experience at a university in the US. Curriculum was centered around a textbook. You were expected to do all end of chapter problems and ask questions if you had difficulty. It wasn't graded. No one checked. You just failed the exam if you didn't.
My high school English teacher's book reports were like this. One by one, you come up, hand over your book, and the teacher randomly picks a couple of passages and reads them aloud and asks what had just happened prior and what happens after. Then a couple opinion questions and boom, pass or fail. Fantastic to not write a paper on it; paper writing was a more dedicated topic.
I've heard that's how studying is done in Oxford/Cambridge: https://en.wikipedia.org/wiki/Tutorial_system
That's also how it's done in almost all French engineering schools. You get open book tests with a small amount of relatively difficult questions and you have 3-4 hours to complete.
In some of the CS tests, coding by hand sucks a bit but to be honest, they're ok with pseudo code as long as you show you understand the concepts.
The European mind cannot comprehend take-home exams.
There is no European mind when it comes to education, hell, there is barely a national mind for those countries with federated education systems (e.g. Germany).
Well take home exams are not very useful nowadays with AI. And yeah, other commenters are right when he says there's no European mind when it comes to education, each country does its own thing.
in France I got a bunch of equivalent take-home tests, between high school and graduate level, mostly in math and science. The teacher would give us exercice equivalent to what we'd get in our exams and we'd have one week to complete it (sometimes in pairs) and it'd be graded as part of that semester
I don’t recall take home exams being at all common undergrad in the US. Open book or one page formula sheets more so.
Certainly with maths you’re marked almost totally on written exams, but even if that weren’t true you’re also required to go over example sheets (hard homework questions that don’t form part of the final mark) with a tutor in two-student sessions so it’d be completely obvious if you were relying on AI.
In italy there's an oral in most exams. In math exams you're asked proofs of theorems (that were part of the course).
I really like oral exams on top of regular exams. The teacher can ask questions and dive into specific areas - it'll be obvious who is just using LLMs to answer the questions vs those who use LLMs to tutor them.
Of course, the reasons they do quizes is to optimize the process (need less tutors/examiners), and to remove bias (any tutor holds biases one way or the other).
The tutorial system is just for teaching, not grading. It does keep students honest with themselves about their progress when they’re personally put on the spot once a week in front of one or two of their peers.
The biggest contrast for me between Oxbridge and another red brick was the Oxbridge tutors aren't shy of saying "You've not done the homework, go away and stop wasting my time", whereas the red brick approach was to pick you up and carry you over the finishing line (at least until the hour was up).
Yes, my undergrad degree grade was determined solely by my performance on 8 three-hour exams at the end of the final year.
Funnily enough, the best use of AI in education is to serve as exactly this kind of tutor. This is the future of education.
The promise of the expansion of this kind of tutorial teaching to everyone via AI is great. The problem is keeping students honest with themselves.
At the end of the day you can't force people to learn if they don't want to.
As a society we need to be okay with failing people who deserve to fail and not drag people across the finish line at the expense of diluting the degrees of everyone else who actually put in effort.
I'm not sure why we care about the degree. Employers care about the degree, but they aren't paying for my education.
The students who want to learn, will learn. For the students who just want the paper so they can apply for jobs, we ought to give them their diploma on the first day of class, so they can stop wasting everybody's time.
Employers want the degree because it's supposed to verify that you have a certain set of knowledge and/or skills, or at the very least, you're capable of thought to the extent required to get that degree. That's the only reason they want it.
Student being unable to unwilling to learn that knowledge or acquire those skills should mean they don't get that degree, they don't get those jobs, and they go work in fast food or a warehouse.
"Just give them the degree" is quite literally the worst possible solution to the problem.
Then employers will stop caring about the degree...
I only partially agree with "you can't force people". I think that all people are just like children, but bigger. You can force a kid to not eat to much sugar, even when they want to.
Same with education, for example you can financially force people to learn, say, computer science instead of liberal arts. Even when they don't like it. It's harder, less efficient, but possible.
Because students wouldn't do the homework and would fail the quizzes. Students need to be pressured into learning and grades for doing the practice are a way. Don't pretend many students are self-motivated enough to follow the lecturer's instructions when there's no grade in it and insisting that "trust me, you won't learn if you don't do it".
I've mostly had non-graded homework in my studies because cheating was always easy. In highschool they might have told your parents if you don't do homework. In university you do what you want. It's never been an issue overall.
> would fail the quizzes.
not those who did actually do the work, and learnt.
The change ought to be that students are allowed to be failed, and this should be a form of punishment for those who "cheat".
Aren't students already allowed to fail?
As a comment upthread said, let them cheat on the take home as much as they want to, they're still going to fail the exam.
Well, from what I understand, the answer is kinda "no".
Depends on the country and educational system I suppose, but I do believe professors in many places get in trouble for failing too many students. It's right there in the phrasing.
If most students pass and some fail, that's fine. Revenue comes in, graduates are produced, the university is happy.
If most students fail, revenue goes down, less students might sign up, less graduate, the university is unhappy.
It's a tragedy of the commons situation, because some professors will be happy to pass the majority of students regardless of merit. Then the professors that don't become the problem, there's something wrong with them.
Likewise, if most universities are easy and some are really hard, they might not attract students. The US has this whole prestige thing going on, that I haven't seen all that much in other countries.
So if the students overall get dumber because they grow up over relying on tools, the correction mechanism is not that they have to work harder once the exam approaches. It's that the exam gets easier.
> Aren't students already allowed to fail?
It's technically allowed on an individual basis, but the economics don't work for any institution to attempt to raise its bar.
If institutions X and Y grant credential Z, and X starts failing a third of its students, who would apply to go there?
For the most part degrees from roughly comparable schools in the same subject are fungible. However, graduating cheaters who should have flunked out of school their freshman year is a one-way ticket to having a reputation that your degree is worthless. You're now comparable to a lower tier of schools and suddenly Y's degree is worth a lot more than yours. The best way (not to only way) to combat this is to actively cull the bottom of your classes. Most schools already do this by kicking out people with low enough GPAs, academic probation, etc. My undergrad would expel you if you had a GPA below 1.8 after your first semester, and you were on academic probation if your GPA was > 1.8 and <=2.5.
This assumes, of course, an institution is actively trying to raise the academic bar of its student population. Most schools are emphatically not trying to do this and are focused more on just increasing enrollment, getting more tax dollars, and hiring more administrators.
Many mathematics professors don't require homework to be turned in for grading. For example, the calculus courses at many US universities. Grades are solely determined by quizzes in the discussion section and by exams. Failure rates are above 30%, but that's accepted.
This model won't work for subjects that rely on students writing reports. But yes, universities frequently accept that failure rates for some courses will be high, especially for engineering and the sciences.
When I was a student, I spent my first 2 years in a so-called prépa intégré of a French engineering school. 20% of students failed and were shown the door during those two years (some failed, some figured that it just wasn't for them). That's fine, that means you keep the ones who actually do the work. At a certain point, you have to start treating students like adult, either they succeed or they don't but it's their personal responsibility.
That looks bad in the international statistics and so on, there's a lot of pressure to just pass everyone.
In Italy university keeps getting easier because their funding is tied to not failing students.
My favorite math professor said "your homework is as many of the odd-numbered problems as you feel like you need to do to understand the material" and set a five minute quiz at the start of each lecture which counted as the homework grade. I can't speak for the other students, but I did more homework in his classes than any of the other math classes I took.
The pop quizzes being part of the grade was the entire point of the comment you replied to. I guess you misread?
Or, well, LLM wouldn't do the homework anymore --- which was the sought-after outcome.
If they fail they learn that they have to study.
That's how it is in Italy. And that's why Italy is behind every other country in education. Because it hasn't yet made graduating as easy as it is in other places.
Well graduation rate is a pretty terrible way to grade education, especially country to country. You could have 100% graduation rate today by just passing everyone - that's basically what we have in primary education and there was an article here just last week about how most college students are functionally illiterate.
In sweden until high school it's literally impossible to fail. There's no grades and no way of failing anyone.
Then they suddenly become kinda stricter in high school, where your results decide if you can go to university and which.
But I've been to one of the top technical universities and compared to Italy it was very easy. It was obvious the goal was to try and have everyone pass. Still people managed to fail or drop out anyway, although not in the dramatic numbers I saw in Italy for math exams.
I think that’s a good idea too and effectively that is the same outcome when the quizzes shape the distribution.
Maybe we'll revert to Soviet bilet-style oral exams...
But they do that already in Danish universities, and I have to admit, those are very effective.
I wonder to what extent this is students who would have stuck it out now taking the easy way and to what extent it’s students who would have just failed now trying to stick it out.
This is an extremely important question, and you’ve phrased it nicely.
We’re either handicapping our brightest, or boosting our dumbest. One part is concerning, the other encouraging .
Which part is encouraging? We rely on the extra ordinary (talent and/or sheer drive) to make leaps of progress - what happens if they are handicapped? If the dumbest fake it and make it to the positions they shouldn't be entrusted with, what prevents the catastrophes?
>We’re either handicapping our brightest, or boosting our dumbest.
Honestly it seems like we're doing both most of the time. It's hard to only optimize resources for boosting the dumbest without taking them away from the brightest.
The brightest will evaluate the tradeoffs properly or will have education that will give them proper evaluations of AI. Maybe some bright people will be handicapped, but it won't be the bright'est'. That handicap on the bright could also lead to new forms of talent and multi-faceted growth.
What percentage of the dumbest will be boosted? What makes a person dumb? If they are productive and friendly, isn't that more important?
What percentage of the dumbest will fall farther or abandon heavy learning even earlier?
my partner teaches high school math and regularly gets answers with calculus symbols (none of the students have taken any calculus). these students aren't putting a single iota of thought into the answers they're getting back from these tools.
To me this is the bigger problem. Using LLMs is going to happen and there's nothing anyone can do to stop it. So it's important to make people understand how to use them, and to find ways to test that students still understand the underlying concepts.
I'm in a 100%-online grad school but they proctor major exams through local testing centers, and every class is at least 50% based on one or more major exams. It's a good way to let people use LLMs, because they're available, and trying to stop it is a fool's errand, while requiring people to understand the underlying concepts in order to pass.
The solution is making all homework optional and having an old-school end of semester exam.
You can always give extra points for homework which then compensate from lacking in tests. If you get perfect points in test, well maximum grade. If less than perfect, you can up grade with those extra points. Fair for everyone.
This assumes the point of education is to be fair and get everyone a good grade, which it's not (at least, it shouldn't be).
>The solution is making all homework optional and having an old-school end of semester exam.
Not really. While doing something to ensure that students are actually learning is important, plenty of the smartest people still don't always test well. End of semester exams also tend to not be the best way to tell if people are learning along the way and then fall off part way through for whatever reason.
When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.
For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interest in penmanship, spelling, or cartography.
I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
Like, Socrates may have been against writing because he thought it made your memory weak, but at least I, an individual, am perfectly capable of manufacturing my own writing implements with a modest amount of manual labor and abundantly-available resources (carving into wood, burning wood into charcoal to write on stone, etc.). But I ain't perfectly capable of doing the same to manufacture an integrated circuit, let alone a digital calculator, let alone a GPU, let alone an LLM. Anyone who delegates their thought to a corporation is permanently hitching their fundamental ability to think to this wagon.
> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
Yes, but that horse has long ago left the barn.
I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
The history of civilization is the history of specialization. No one can re-build all the tools they rely on from scratch. We either let other people specialize, or we let machines specialize. LLMs are one more step in the latter.
The Luddites were right: the machinery in cotton mills was a direct threat to their livelihood, just as LLMs are now to us. But society marches on, textile work has been largely outsourced to machines, and the descendants of the Luddites are doctors and lawyers (and coders). 50 years from new the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."
This reply brings to mind the well-known Heinlein quote:
It’s a quote from a character in Heinlein’s fiction. A human character with a lifespan of over a thousand years.
I too liked that quote and found it inspiring. Until I read the book, that is.
I know the character is Lazarus Long. Which book was this quote in?
This is one of the cases where you should indeed rely on Google.
You just created a modern take on LMGTFY
It's now pronounced LMCGPTTFY
I've had people do this to me (albeit in an attempt to be helpful, not snarky) and it felt so weird. The answers are something a copywriter would have thrown together in an hour. Generic, unhelpful drivel.
Seems to be from the book "Time Enough for Love"
That's a quote that sounds great until, say, that self-built building by somebody who's neither engineer nor architect at best turns out to have some intractible design flaw and at worst collapses and kills people.
It's also a quote from a character who's literally immortal and so has all the time in the world to learn things, which really undermines the premise.
I would like to replay with another quote by another immortal(or long lived) character, Professor „Reg“ Chronotis from Douglas Adams:
"That I lived so much longer, just means, that I forgot much more, not that I know much more."
Memory might have a limited capacity, but of course, I doubt most humans use that capacity, or well, for useful things. I know I have plenty of useless knowledge ..
I sort of view that list as table stakes for a well rounded capable person.. Well barring the invasion bit. Then again, being familiar with guns and or other forms of self defense is valuable.
I think most farmers would be somewhat capable on most of that list. Equations for farm production. Programming tractor equipment. Setting bones. Giving and taking orders. Building houses and barns.
Building a single story building isn’t that difficult, but time consuming. Especially nowadays with YouTube videos and pre-planned plans.
> pre-planned plans
Isn't that cheating? Shouldn't a properly self-reliant human be able to come up with the plans too?
To bake a cake from scratch, first, you must create the universe
Learning from others doesn’t mean you are not learning.
Ask the LLM to create plans and step by step guides then!
> self-built building by somebody who's neither engineer nor architect
That is exactly how our ancestors built houses. Also a traditional wooden house doesn't look complicated.
I'm not saying that our ancestors were wrong. Hell, I live in a house that was originally built under similar conditions.
That being said, buildings collapse a lot less frequently these days. House fires happen at a lower rate. Insulation was either nonexistent or present in much lower quantities.
I guess the point I'm making is that the lesson here shouldn't be "we used to make our houses, why don't we go back to that?" It also shouldn't be "we should leave every task to a specialist."
Know how to maintain and fix the things around your house that are broken. You don't need a plumber to replace the flush valve on your toilet. But maybe don't try to replace a load-bearing joist in your house unless you know what you're doing? The people building their own homes weren't engineers, but they had a lot more carpentry experience than (I assume) you and I.
>Law § 229 of Hammurabi's Code
>If a house builder built a house for a man but did not secure/fortify his work so that the house he built collapsed and caused the death of the house owner, that house builder will be executed.
If even professionals did get it wrong so often that there had to be law for it... Yeah, maybe it is not that simple.
In a village most houses were built by their owners. I am not talking here about nicely decorated brick buildings in a city: they were obviously designed and built by professionals.
> That is exactly how our ancestors built houses. Also a traditional wooden house doesn't look complicated.
The only homes built by our ancestors that you see are those that didn't collapsed and killed whoever was inside, burned down, were too unstable to live in, were too much of a burden to maintain and keep around, etc.
https://en.wikipedia.org/wiki/Survivorship_bias
And what happened to them, I wonder?
Well, they reproduced so we could exist now. Definition of ancestors.
That’s…not what I asked. Y’all need to recognize that Darwinism was intended as an explanatory theory, not as an ethos. And it’s not how we judge building practices.
Honestly having gone through the self build process for a house it’s not that hard if you keep it simple. Habitat for humanity has some good learning material
The sheer amount of activities that he left out because he couldn't even remember they existed would turn this paragraph into a book.
What an awful quote. Literally all progress we've made is due to ever increasing specialization.
That is literally not true.
I'd be interested in counter-examples?
Given the original, ludicrous, claim was:
> Literally all progress we've made is due to ever increasing specialization.
Then we don't really need plural examples, right?
Anyway - language, wheel, fire, tool-making, social constructs like reciprocity principle - I think gave us some progress as a species and a society.
All of these examples are done by specialist because I don't see many cars being build by dentists.
Even in mankind's beginning specialization existed in the form of hunter and gatherer. This specialization in combination with team work brought us to the top of the food chain to a point where we can strive beyond basic survival.
The people making space crafts (designing and building, another example of specialization) don't need to know how to repair or build a microwave to heat there for food.
Of course everybody still needs to know basic knowledge (how to turn on microwave) to get by.
> All of these examples are done by specialist because I don't see many cars being build by dentists.
I'm not sure how you get from pre-agricultural humans developing fire, to dentists building cars.
I don't doubt that after fire was 'understood', there was specialisation to some degree, probably, around management of fire, what burns well, how best to cook, etc.
But any claim that fire was the result of specialisation seems a bit hard to substantiate. A committee was established to direct Thag Simmons to develop a way to .. something involving wood?
Wheel, the setting of broken bones, language etc - specialisation happened subsequently, but not as a prerequisite for those advances.
> Even in mankind's beginning specialization existed in the form of hunter and gatherer. This specialization in combination with team work brought us to the top of the food chain to a point where we can strive beyond basic survival.
Totally agree that we advanced because of two key capabilities - a) persistence hunting, b) team / communication.
You seem to be conflating the result of those advancements with "all progress", as was GP.
> The people making space crafts (designing and building, another example of specialization) don't need to know how to repair or build a microwave to heat there for food.
I am not, was not, arguing that highly specialised skills in modern society are not ... highly specialised.
I was arguing against the lofty claim that:
"All progress we've made is due to ever increasing specialization."
Noting the poster of that was responding to a quote from a work of fiction - claiming it was awful - that the author had suggested everyone should be familiar with (among other things) 'changing a diaper, comfort the dying, cooperate, cook a tasty meal, analyse a problem, solve equations' etc.
If you're suggesting that you think some people in society should be exempt from some basic skills like those - that's an interesting position I'd like to see you defend.
> Of course everybody still needs to know basic knowledge (how to turn on microwave) to get by.
FWIW I don't have a microwave oven.
The discovery of fire itself was not progress, but how to use it very much is. They most likely didn't have a "discover fire" specialization in the modern sense but I doubt the one first to create a fire starter was afterwards deligated to only collect berries. The discovery and creation of something obviously often comes before the specialization or it would otherwise be impossible to discover and create anything.
>FWIW I don't have a microwave oven.
That was just an example. You still know how to use them hence basic knowledge. Seem like this discussion boils down to semantics
I dispute your foundational claim that discovery of things != progress.
I concur that semantics have a) overtaken this thread, and b) are part of my complaint with OP when they claimed all historical progress was the result of specialisation.
A lot of discoveries come from someone applying their scientific knowledge to a curious thing happening in their hobby or private life. A lot of successful businesses apply software engineering to a specific business problem that is invisible to all other engineers.
Counter-examples are not really their area, evidently.
I haven't butchered a hog or died yet.
This is a fantastic and underrated quote, despite all of the problems I have with Heinlein's fascism-glorifying work.
The quote is more reasonable in context.
I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.
IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.
> IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.
The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.
No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
> The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.
And I can tell you from experience that "letting a calculator solve arithmetic" (or more accurately, being dependent on a calculator to solve arithmetic) means you cripple your ability to learn and understand more advanced stuff. At best your decision turned you into the equivalent of a computer trying to run a 1GB binary with 8MB of RAM and a lot of paging.
> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
It's the latter. Though I suspect the masses will be shoved into the garbage disposal than be allowed to wallow in ignorant consumerism. Only the elite that owns the means of production will be allowed to indulge.
There are opposing trends in this. First, that like many tools the capable individual can be made much more effective (eg 2x->10x), which simply replaces some workers, and last occurred during the great depression. Second, that the tools become commoditized to the point where they are readily available from many suppliers at reasonable costs, which happened with calculators, word processors, and office automation. This along with a growing population, global trade, and rising demand led to the 80s-2k boom.
If the product is not commoditized, then capital will absorb all the increased labor efficiency, while labor (and consumption) are sacrificed on the altar of profits.
I suspect your assumption is more likely. Voltaire's critique of 'the best of all possible worlds' and man's place in creating meaning and happiness, provides more than one option.
I know how to do arithmetic, but I still use my PC or a calculator because I am not entirely sure that I am accurate. I use "units" as well extensively, it can be used for much more than just unit conversion. You can do complex calculations with it.
You can solve stuff like:
> If you walk 1 mile in 7 minutes, how fast are you walking in kilometers per hour?
You need some basic knowledge to even come up with "1 mile / 7 minutes" and "kilometers per hour".There are examples where you need much more advanced knowledge, too, meaning it is not enough to just have a calculator, for example in thermodynamics, when dealing with gas laws, you cannot simply convert pressure, volume, and temperature from one unit to another without taking into account the specific context of the law you’re applying (e.g., the ideal gas law or real gas behavior)", or for example you want to convert 1 kilowatt-hour (kWh) to watts (W). This is a case of energy (in kilowatt-hours) over time (in hours), and we need to convert it to power (in watts), which is energy per unit time.
You cannot do:
You have to have some knowledge, so you could do: To sum it up: in many cases, without the right knowledge, even the most accurate tool will only get you part of the way there.It applies to LLMs and programming, too, thus, I am not worried. We will still have copy-paste "programmers", and actually knowledgeable ones, as we have always had. The difference is that you can use LLMs to learn, quite a lot, but you cannot use a calculator alone to learn how to convert 1 kWh to W.
>the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E
Seeing how the world is based around consumerism, this future seems more likely.
HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.
> HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.
Given what I know of human nature, this seems improbable.
I think it's possible. I think the greatest trick our current societal structure ever managed to pull, is the proliferation of the belief that any alternatives are impossible. "Capitalist realism"
People who organize tend to be the people who are most optimistic about change. This is for a reason.
It may be possible for you (I am assuming you are > 20, mature adult). But the context is around teens in the prime of their learning. It is too hard to keep ChatGPT/Claude away from them. Social media is too addictive. Those TikTok/Reels/Shorts are addictive and never ending. We are doomed imho.
If education (schools) were to adopt a teaching-AI (one that will given them the solution, but at least ask a bunch of questions ), may be there is some hope.
>We are doomed imho.
I encourage you to take action to prove to yourself that real change is possible.
What you can do in your own life to enact change is hard to say, given I know nothing about your situation. But say you are a parent, you have control over how often your children use their phones, whether they are on social media, whether they are using ChatGPT to get around doing their homework. How we raise the next generation of children will play an important role in how prepared they are to deal with the consequences of the actions we're currently making.
As a worker you can try to organize to form a union. At the very least you can join an organization like the Democratic Socialists of America. Your ability to organize is your greatest strength.
So your plan is to encourage people to "get off the Internet" by posting on the Internet, and to stave off automation by encouraging workers to gang up on their employers and make themselves a collective critical point of failure.
Well, you know, we'd all love to change the world...
Apparently you'd love to change the world; a good start would be accurately reading and recounting others' arguments.
Agreed, that's important. What'd I get wrong?
>Well, you know, we'd all love to change the world
The social contract lives and dies by what the populace is willing to accept. If you push people into a corner by threatening their quality of life, don't be surprised if they push back.
Exactly, so don't be surprised if you receive some pushback as well. AI may threaten you, but it empowers me.
> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.
I believe there is some truth to it. When you automated away some time-consuming tasks, your time and focus is shifted elsewhere. For example, washing clothes is no longer a major concern since the massification of washing machines. Software engineering also progressively shifted it's attention to higher-level concerns, and went from a point where writing/editing opcodes was the norm to a point where you can design and deploy a globally-available distributed system faster than what it takes to build a program.
Focusing on the positive traits of AI, having a way to follow the Socratic method with a tireless sparring partner that has an encyclopedic knowledge on everything and anything is truly brilliant. The bulk of the people in this thread should be disproportionally inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference.
> The bulk of the people in this thread should be disproportionally inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference
I agree that AI could be an enormous educational aid to those who want to learn. The problem is that if any human task can be performed by a computer, there is very little incentive to learn anything. I imagine that a minority of people will learn stuff as a hobby, much in the way that people today write poetry or develop film for fun; but without an economic incentive to learn a skill or trade, having a personal Socratic teacher will be a benefit lost on the majority of people.
I think it's both, just like we saw with the internet.
> the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either
This is the existential crisis that appears imminent. What does it mean if humanity, at large, begins to offload thinking (hence decision making), to machines?
Up until now we’ve had tools. We’ve never before been able to say “what’s the right way to do X?”. Offloading reasoning to machines is a terrifying concept.
> I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.
There's also the problem of developing critical thinking skills. It's not very comforting to think of a time where your average Joe relies on an AI service to tell what he should think and believe, when that AI service is ran, trained, and managed by people pushing radical ideologies.
I think the latest GenAI/LLM bubble shows that tech (this hype kind of tech) doesn't want us to learn, to think or reason. It doesn't want to be seen as a mere tool anymore, it wants to drive under the appearance that it can reason on its own. We're in the process where tech just wants us to adapt to it.
”I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site.”
Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.
Sure, you might be able to get a job slinging AI code to produce CRUD apps or whatever. But since that’s the easy thing, you’re going to have a hard time standing out from the pack. Yet we will still need people who understand the concepts at a deeper level, to fix the AI messes or to build the complex systems AI can’t, or the systems that are too critical to rely on AI, or the ones that are too regulated. Being able to do those thing, or to just better understand what the AI is doing to get better more effective results, that will be more valuable than just blindly leaning on AI, and it will remain valuable for a while yet.
Maybe some day the AI can do everything, including ASICs and growing crops, but it’s got a long way to go still.
> Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.
I think you're missing the point of my comment. I'm not saying that human knowledge is useless. I'm specifically arguing against the case that:
> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.
My logic being that we are already irreversibly dependent on supply chains.
You’re absolutely right. But my point still stands, too, which is that despite being irreversibly dependent on supply chains, doesn’t mean we are redundant. We still need people at all levels of the supply chain.
Maybe it’s fewer people, yes, but it’ll take quite a leap forward in AI ability to replace all the specialists we will continue to require, especially as the less-able AI makes messes that need to be cleaned up.
I don’t think specialization is a bad thing but the friends I know that only know their subject seem to… how do I put this… struggle at life and everything a lot more.
And even at work, the coworkers that don’t have a lot of general knowledge seem to work a lot harder and get less done because it takes them so much longer to figure things out.
So I don’t know… is avoiding the work of learning worth it to struggle at life more?
I dunno, the "tool" that LLMs "replace" is thinking itself. That seems qualitatively different than anything that has come before. It's the "tool" that underlies all the others.
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS.
Why not? I mean that, quite literally.
I don't know how to make an ASIC, and if I tried to write an OS I'd probably fail miserably many times along the way but might be able to muddle through to something very basic. The rest of that list is certainly within my wheelhouse even though I've never done any of those things professionally.
The peer commenter shared the Heinlein quote, but there's really something to be said for /society/ of being peopled by well-rounded individuals that are able to competently turn themselves to many types of tasks. Specialization can also be valuable, but specialization in your career should not prevent you from gaining a breadth of skills outside of the workplace.
I don't know how to do any of the things in your list (including building a web site) as an /expert/, but it should not be out of the realm of possibility or even expectation that people should learn these things at the level of a competent amateur. I have grown a garden, I have worked on a farm for a brief time, I've helped build houses (Habitat for Humanity), I've taken a hobbyist welding class and made some garish metal sculptures, I've built a race car and raced it, and I've never built a toaster but I have repaired one (they're actually very electrically and mechanically simple devices). Besides the disposable income to build a race car, nothing on that list stands out to me as unachievable by anyone who chooses to do so.
> The peer commenter shared the Heinlein quote, but there's really something to be said for /society/ of being peopled by well-rounded individuals that are able to competently turn themselves to many types of tasks
Being a well-rounded individualist is a great, but that's an orthogonal issue to the question of outsourcing our skills to machinery. When you were growing crops, did you till the land by hand or did you use a tractor? When you were making clothes did you sew by hand or use a sewing machine? Who made your sewing needles?
The (dubious) argument for AI is that using LLMs to write code is the same as using modern construction equipment to build a house: you get the same result for less effort.
ok - but.. here in California, look at houses that are 100 years old, then look at the new ones.. sure you can list the improvements in the new ones, on a piece of paper.. the craftsmanship, originality and other intangibles are obviously gone in the modern versions.. not a little bit gone, a lot gone. Let the reader use this example as a warmup to this new tech question.
I've done all of those except tend livestock and build a house, but I could probably figure those out with some effort.
>50 years from new the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."
And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI? Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?
We live in a highly specialized society that requires people take out large loans to learn the skills necessary for their careers. You take away their ability to provide their labor, and it now seriously threatens millions of workers from obtaining the same quality of life they once had.
I seriously oppose such a future, and if that makes me a Luddite, so be it.
It took me a long time to master the pen tool in Photoshop. I don't mean that I spent a weekend and learned how it worked. I meant that out of all the graphic designers at the agency I was working for, I was the designer who had the most flawless pen-tool skills and thus was the envy of many. It is now an obsolete skill. You can segment anything instantly and the results are pristine. Thanks to technology one no longer needs to learn how to make the most form-fitting curves with the pen tool to be labeled a great graphic designer.
It's remarkable that reading and writing, once the guarded domain of elites and religious scribes, are now everyday skills for millions. Where once a handful of monks preserved knowledge with their specialized scribing skills, today anyone can record history, share ideas, and access the thoughts of centuries with a few keystrokes.
The wheel moves on and people adapt. Who knows what the "right side" of history will be, but I doubt we get there by suppressing advancements and guaranteeing job placements simply because you took out large loans to earn a degree and a license.
But what if the rate at which things change increases to the point that humans can't adapt in time? This has happened to other animals (coral has existed for millions of years and is now threatened by ocean acidification, any number of native species have been crowded out by the introduction of non-native ones, etc.).
Even humans have gotten shocks like this. Things like the Black Death created social and economic upheavals that lasted generations.
Now, these are all biological examples. They don't map cleanly to technogical advances, because human brains adapt much faster than immune systems that are constrained by their DNA. But the point is that complex systems can adapt and can seem to handle "anything," up until they can't.
I don't know enough about AI or LLM's to say if we're reaching an inflection point. But most major crises happen when enough people say that something can't happen, and then it happens. I also don't think that discouraging innovation is the solution. But I don't also want to pretend like "humans always adapt" is a rule and not a 300,000 year old blip on the timeline of life's existence.
Automating drudgery is a good thing. It frees us up to do more important things.
Automating thinking and self-expression is a lot more dangerous. We're not automating the calculation or the research, but the part where you add your soul to that information.
How is a pen tool in Photoshop equivalent to an AI that can perform your entire job at a lower cost remotely similar? There are levels to this, and I don't think the same old platitudes apply.
> And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI?
Some will manage to remain in their field, most won't.
> Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?
They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories [1].
> I seriously oppose such a future, and if that makes me a Luddite, so be it.
Like I said, the Luddites were right, in the short term. In the long term, we don't know. Maybe we'll live in a post-scarcity Star Trek world where human labor has been completely devalued, or maybe we'll revert to a feudal society of property owners and indentured servants.
[1] https://www.newsweek.com/bessent-fired-federal-workers-manuf...
>They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories
>or maybe we'll revert to a feudal society of property owners and indentured servants.
We as the workers in society have the power to see that this doesn't happen. We just need to organize. Unionize. Boycott. Organize with people in your community to spread worker solidarity.
There is no industry that I have worked in that fights against creating or joining unions tooth, claw, and nail quite like software engineers.
I think more and more workers are warming up to unions. As wages in software continue to be oppressed, I think we'll see an increase in unionization efforts for software engineers.
Software engineer wages? Oppressed? This is work that averages well over 6 figures—for a single worker, for desk work—in the US?
https://www.indeed.com/career/software-engineer/salaries
https://www.levels.fyi/t/software-engineer/locations/united-...
Yes, oppressed: https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...
"Gee, it seems that people in my profession are in danger of being replaced by AI. I wonder if there's anything I can do to help speed up that process..."
The best plan is to wait until your bargaining position has been thoroughly destroyed rather than taking any action while you still have some power.
If that were indeed the case, your employer might not be investing so much in automation. They don't want to give up bargaining power any more than you do.
Worker solidarity works. Whether you agree with the methods or not is irrelevant.
Your opinion is wild.
Hmm, millions of humans are spending a bulk of their lives plugging away at numbers on a screen. We can replace this with an AI and free them up to do literally anything else.
No, let's not do that. Keep being slow ineffective calculators and lay on your death bed feeling FULFILLED!
You're skipping over a critical step, which is that our society allocated resources based on the labor value that an individual provides. If we free up everyone to do anything, they're not providing labor any more, so they get no resources. In other words, society needs to change in big ways, and I don't know how or if it will do that.
>If we free up everyone to do anything
Not anything, but something useful. And in exchange for that useful they'll get resources (which will become more abundant).
>something useful
Where is the existing work these people would take up? If it doesn't exist yet, then how do you suppose people will support themselves in the meantime?
What if the new work that is created pays less? Do you think people should just accept being made obsolete to take up lower paying jobs?
>Where is the existing work these people would take up? If it doesn't exist yet, then how do you suppose people will support themselves in the meantime?
Everywhere in human society. "Jobs" is literally when you do something that someone needs, so that in exchange they do something that you need. And in human society, because of AI, neither people’s needs, nor the ability to satisfy them, nor the possibility of exchanging them will suddenly disappear. So the jobs will be everywhere.
>Do you think people should just accept being made obsolete to take up lower paying jobs?
Let's start with the fact that on average all jobs will become higher paying because the amount of goods produced (and distributed) will increase. So the more correct answer to this question is "What choice will they have?".
AI will make the masses richer, so society will not abandon it. Subsidize their obsolete well-paid jobs to make society poorer? Why would anyone do that? So the people replaced by AI will go to work in other jobs. Sometimes higher paying, sometimes lower.
If we are talking about real solutions, the best alternative they will have is to form a cult like the Amish did (God bless America and capitalism), in which they can pretend that AI does not exist and live as before. The only question in this case is whether they will find willing participants, because for most, participation in such a cult will mean losing the increase in income provided by the introduction of AI.
>AI will make the masses richer, so society will not abandon it
This remains to be seen. Inequality is worse now than it was 20 years ago despite technology progressing. This is true across income and wealth.
>This remains to be seen.
No, that's just logic. AI doesn't thwart the ability of people to satisfy their needs (getting richer).
>Inequality is worse now than it was 20 years ago despite technology progressing.
And people are still richer than ever before (if we take into account the policies that are thwarting society's ability to satisfy each other's needs and that have nothing to do with technologies)
Huh? If AI can do any job cheaper and better than a person can, why would anyone hire a person? What "useful" skill could a person exchange for resources in an era when computers write code, drive cars, fight wars, and cook food?
But you answer your own question: the only situation in which it makes no sense to hire another person to satisfy a need is when that need has already been satisfied in another way.
And if all needs are already satisfied... Why worry about work? The purpose of work is to satisfy needs. If needs are satisfied, there is no need for work.
You assume the everyone's needs are solved together. More likely is that the property owning class acquire AI robots to provide cheap labor; and everyone else doesn't.
>You assume the everyone's needs are solved together.
No, I am not assuming that. "Together" are not required. It's just combination of needs, ability to satisfy them and ability to exchange - creates jobs. And nothing of this will be thwarted by AI.
>More likely is that the property owning class acquire AI robots to provide cheap labor
Doesn't matter. Your everyday person either will be able to afford this cheap AI labor for themselves (no problem that required solving) or if AI labor for them are unaffordable - will create jobs for other people (there will be jobs on market everywhere)
Okay, so AI will be reserved for the rich, and everyone else will be left to rot in squalor. Got it.
>We can replace this with an AI and free them up to do literally anything else.
I would happily support automation to free myself, and others, from having to work full-time. But we live in a capitalist society, not StarTrek. Automation doesn't free up people from having to work, it only places people in financial crisis.
They'll get different jobs bud.
Good idea. I'll give up my job as a programmer/doctor/lawyer/professor to an AI, and instead I'll dig ditches, I guess. AI can't do that (yet).
Will those jobs pay them the same amount, allow them to have similar qualities of life?
Specialization is over-rated. I've done everything in your list except make an ASIC because learning how to do those things was interesting and I prefer when things are done my way.
I started way back in my 20s just figuring out how to write websites. I'm not sure where the camel's back would have broken.
It has, of course, been convenient to be able to "bootstrap" my self-reliance in these and other fields by consuming goods produced by others, but there is no mechanical reason that said goods should be provided by specialists rather than advanced generalists beyond our irrational social need for maximum acceleration.
Jack of all trades, master of none. I also somehow doubt that you've built a car from scratch, including designing the engine, carving it out of a block of metal and so on. And if we're talking modern car, good luck fabbing the integrated circuits in your backyard or whatever. Even your particular generalist fantasy will (and most likely has) hit the hard constraints of specialization real quick.
There is no single human alive that can understand or build a modern computer from top to bottom. And this is true for various bits of human technology, that's how specialized we are as a species.
It was an electric car, but it's true that I bought the motor (and the esc, batteries, etc...) That said, I have wound several motors in my day.
I'm not saying some degree of specialization isn't desirable in the world, just that it's overrated.
Do we need a modern computer?
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
Reminds me of the Nate Bargatz set where he talks about how if he was a time traveler to the past that he wouldn't be able to prove it to anyone. The skills most of us have require this supply chain and then we apply it at the very end. I'm not sure anyone in 1920 cares about my binary analysis skills.
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?
All the things you mention have a certain objective quality that can be reduced to an approachable minimum. A house could be a simple cabin, a tent, a cave; a piece of cloth could just be a cape; metal can be screwed, glued or cast; a transistor could be a relay or a wooden mechanism etc. ...history tells us all that.
I think when there's a Homo ludens that wants to play, or when there's a Homo economicus that wants us to optimize, there might be one that separates the process of learning from adaptation (Homo investigans?)[0]. The process of learning something new could be such a subjective property that keeps a yet unknown natural threshold which can't be lowered (or "reduced") any further. If I were to be overly pessimistic, a hardcore luddite, I'd say that this species is under attack, and there will be a generation that lacks this aspect, but also won't miss it, because this character could have never been experienced in the first place.
[0]: https://en.wikipedia.org/wiki/Names_for_the_human_species#Li...
Speak for yourself. Some of us see the difficulty in sustaining and maintaining this fragile technology stack and have decided to do something about it. I may not be able to do all those things but it is worth learning, since there really is no downside for someone who enjoys learning. I am tackling farming and cpu design at the moment and it is tremendously fun.
Good for you, I guess, but your hobbyist interest in farming is not an argument against using AI. The point of my comment is that the our technology stack is already large enough that adding one more layer is not going to make a difference.
Things like this give us enshitification. When the consumer has no understanding of what they're buying, they have to take corporations at their word that new "features" are actually beneficial, when they're mostly beneficial to the seller.
Kind of like how an ignorant electorate makes for a poor democracy, an ignorant consumer base makes for a poor free market.
Why do people keep parroting this reduction of Socrates' thoughts... I don't think it was just as simple as he thought writing was bad. And we already know that writing isn't everything, anyone who as done any study of a craft can tell you that reading and writing don't teach you the feel of the art form, but can also nonetheless aid in the study. It's not black and white, even though people like to make it out to be.
SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.
PHAEDRUS: You are absolutely right about that, too.
SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?
PHAEDRUS: Which one is that? How do you think it comes about?
SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent.
[link](https://newlearningonline.com/literacies/chapter-1/socrates-...)
Thank you for bringing light to this.
I think it makes a very relevant point to us as well. The value of doing the work yourself is in internalizing and developing one's own cognition. The argument of offloading to the LLM to me sounds link arguing one should bring a forklift to the gym
Yes, it would be much less tiresome and you'd be able to lift orders of magnitude more weights. But is the goal of the gym to more efficiently lift as much weight as possible, or to tire oneself and thus develop muscles?
That's pretty funny, considering LLMs mostly solve his problem with writing. At the very least it's way better than his discourse "solution".
I don't know, most of the things I'm reliant on, from my phone, ISP, automobile, etc are built on fragile interdependent supply chains provided by for-profit companies. If you're really worried about this, you should learn survival skills not the academic topics I'm talking about.
So if you're not bothering to learn how to farm, dress some wild game, etc, chances are this argument won't be convincing for "why should I learn calculus"
For what it's worth, locally runnable language models are becoming exceptionally capable these days, so if you assume you will have some computer to do computing, it seems reasonable to assume that it will enable you to do some language model based things. I have a server with a single GPU running language models that easily blow GPT 3.5 out of the water. At that point, I am offloading reasoning tasks to my computer in the same way that I offload memory take to my computer through my note taking habits.
Although I agree, convincing children to learn using that rationalization won’t work.
Yes it does. Plenty of children accept "you won't always have (tool)" as a reason for learning.
All adults were once children and there are plenty of adults who cannot read beyond a middle school reading level or balance a simple equation. This has been a problem before we ever gave them GPTs. It stands to reason it will only worsen in a future dominated by them.
“You won’t always have a calculator” became moderately false to laughably false as I went from middle to high school. Every task I will ever do for money will be done on a computer.
I’m still garbage at arithmetic, especially mental math, and it really hasn’t inhibited my career in any way.
But I bet you'd know if some calculated number was way too far-off.
I'm no Turing or Ramanujan, but my opinion is that knowing how the operations work, and as example understanding how the area under a curve is calculated, allows you to guesstimate whether numbers are close enough in terms of magnitude to what you are calculating, without needing to be exact in figures.
It is shocking how often I have looked at a spreadsheet, eyeballed the number of rows and the approximate average of numbers in there and figured out there's a problem with a =choose-your-forumula(...) getting the range wrong.
It's pretty annoying in customer service when someone handing you back change has difficulty doing the math. There's been many times doing simple arithmetic in my head has been helpful, including times when my hands were occupied.
I don’t know where you live, but I haven’t used nor carried cash on me for at least 5 years now. Everything either takes card or just tap using your phone/watch. Everything. Parking meters, cashiers, online shopping, filling up your car. I live in a “third world” country too.
Good for you? I use cash half the time. For one thing, I'm not tracked for every single purchase.
I also love (and own) goats and use cash for most purchases.
nobody ever said that. that's ai apologist history revisionism.
Use it or lose it. With the invention of the calculator, students lost the ability to do arithmetic. Now, with LLMs, they lose the ability to think.
This is not conjecture by the way. As a TA, I have observed that half of the undergraduate students lost the ability to write any code at all without the assistance of LLMs. Almost all use ChatGPT for most exercises.
Thankfully, cheating technology is advancing at a similarly rapid pace. Glasses with integrated cameras, WiFi and heads-up display, smartwatches with polarized displays that are only readable with corresponding glasses, and invisibly small wireless ear-canal earpieces to name just a few pieces of tech that we could have only dreamed about back then. In the end, the students stay dumb, but the graduation rate barely suffers.
I wonder whether pre-2022 degrees will become the academic equivalent to low-background radiation steel: https://en.wikipedia.org/wiki/Low-background_steel
"Technology can do X more conveniently than people, so why should children practice X?" has been a point of controversy in education at least since pocket calculators became available.
I try to explain by shifting the focus from neurological to musculoskeletal development. It's easy to see that physical activity promotes development of children's bodies. So although machines can aid in many physical tasks, nobody is suggesting we introduce robots to augment PE classes. People need to recognize that complex tasks also induce brain development. This is hard to demonstrate but has been measured in extensive tasks like learning languages and music performance. Of course, this argument is about child development, and much of the discussion here is around adult education, which has some different considerations.
My last calculator had a "solve" button and we could bring it in an exam.
You still needed to know what to ask it, and how to interpret the output. This is hard to do without an understanding of how the underlying math works.
The same is true with LLMs. Without the fundamentals, you are outsourcing work that you can't understand and getting an output that you can't verify.
I would add that we don't pretend PE or gyms serve any higher purpose besides individual health and well-being, which is why they are much more game-ified than formal education. If we acknowledge that it doesn't particularly matter how a mind is being used, the structure of school would change fundamentally.
This is the motivation I needed right now
The problem with GPS is that you never learn to orient yourself. You don't learn to have a sense of place, direction or elapsed distance. [0]
As to writing, just the action of writing something down with a pen, on paper, has been proven to be better for memorization than recording it on a computer [1].
If we're not teaching these basic skills because an LLM does it better, how do learn to be skeptical of the output of the LLM. How do we validate it?
How do we bolster ourselves against corporate influences when asking which of 2 products is healthier? How do we spot native advertising? [2]
[0]: https://www.nature.com/articles/531573a
[1]: https://www.sciencedirect.com/science/article/abs/pii/S00016...
[2]: Example: https://www.nytimes.com/paidpost/netflix/women-inmates-separ...
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc.
I'm the polar opposite. And I'm AI researcher.
The reason you can't answer your kid when he asks about LLMs is because the original position was wrong.
Being able to write isn't optional. It's a critical tool for thought. Spelling is very important because you need to avoid confusion. If you can't spell no spell checker can save you when it inserts the wrong word. And this only gets far worse the more technical the language is. And maps are crucial too. Sometimes, the best way to communicate is to draw a map. In many domains like aviation maps are everything, you literally cannot progress without them.
LLMs are no different. They can do a little bit of thinking for us and help us along the way. But we need to understand what's going on to ask the right questions and to understand their answers.
This is an insane take.
The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The AI becomes their brain, such that they cannot function without it.
I'd never want to work with someone who is this reliant on technology.
Maybe 40 years ago there were programmers that would not work with anyone that use IDEs or automated memory management. When presented with a programming task that requires these things and you're WITHOUT your IDE or whatever, they will fall apart.
Look, I agree with you, I'm just trying to articulate to someone why they should learn X if they believe an LLM could help them and "an LLM won't always be around" isn't a good argument, because lets be honest, it likely will. This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math"
> This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math"
People who can't do simple addition and multiplication without a calculator (12*30 or 23 + 49) are absolutely at a disadvantage in many circumstances in real life and I don't see how you could think this isn't true. You can't work as a cashier without this skill. You can't play board games. You can't calculate tips or figure out how much you're about to spend at the grocery store. You could pull out your phone and use a calculator in all these situations, but people don't.
You are also likely to be more vulnerable to financial mishaps and scams.
A lot of developers of my generation (30+) learned to program within a code editor and compile their project in command line. Remove the IDE and we can still code.
On the other hand my master 2 students, most of which learned scripting in the previous year, can't even split a project in multiple files after being explained multiple times. Some have more knowledge and ability than others, but a signifiant fraction is just copy-pasting LLM output to solve whatever is asked from them instead of trying to do it themselves, or asking questions.
I think the risk isn't just that LLMs won't exist, but that they will fail at certain tasks that need to get done. Someone who is highly dependent on prompt engineering and doesn't understand any of the underlying concepts is going to have a bad time with problems they can't prompt their way out of.
This is something I see with other tools. Some people get highly dependent on things like advanced IDE features and don't care to learn how they actually work. That works fine most of the time but if they hit a subtle edge case they are dead in the water until someone else bails them out. In a complicated domain there are always edge cases out there waiting to throw a wrench in things.
Do you have the skills and knowledge to survive like a pioneer from 200 years ago?
Technology is rapidly changing humanity. Maybe for the worse.
Knowledge itself is the least concern here. Human society is extremely good at transmitting information. More difficult to transmit are things like critical thinking and problem-solving ability. Developing meta-cognitive processes like the latter are the real utility of education.
Indeed. More people need to grow their own vegetables. AI may undermine our ability for high level abstract thought, but industrial agriculture already represents an existential threat, should it be interrupted for any reason.
Did the pioneers 200 years ago have the skills and knowledge to survive as a serf 400 years ago? Or as a mid-level financial analyst today?
Or is pioneering 200 years ago an in-demand skillset that we should be picking up?
My point is that the necessary skill set required by society is ever-changing. Skills like handwriting, spelling, and reading a map are fading from importance.
I could see a future where pioneering might be useful again.
Do you wear glasses? Or use artificial light?
Or do you have perfect vision and get all your work done during the sunlight hours?
Technology is everywhere, nobody is independent from it.
Do you work with people who can multiply 12.3% * 144,005.23 rapidly without a calculator?
> The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.
The parent poster is positing that for 90% of cases they WILL have their AI assistant because its in their pocket, just like a calculator. It's not insane to think that and its a fair point to ponder.
When in human history has a reasonably educated person been able to do that calculation rapidly without a calculator (or tool to aid them)? I think it's reasonable to draw a distinction between "basic arithmetic" and "calculations of arbitrary difficulty". I can do the first and not the second, and I think that's still been useful for me.
I do agree that it's a fair point to ponder. It does seem like people draw fairly arbitrary lines in the sand around what skills are "essential" or not. Though I can't even entertain the notion that I shouldn't be concerned about my child's ability to spell.
Seems to me that these gains in technology have always come at a cost, and so far the cost has been worth it for the most part. I don't think it's obviously true that LLMs will be (or won't be) "worth it" in the same way. And anyways the tech is not nearly mature enough yet for me to be comfortable relying on it long term.
Yes, it must be more than 17k and less than 18k
Perhaps that mode of thinking is wrong, even if it is accepted.
Take rote memorization. It is hard. It sucks in so many ways (just because you memorized something doesn't mean you can reason using that information). Yet memorization also provides the foundations for growth. At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for? How can you assess the validity of a source if you don't know the fundamentals? How can you avoid falling prey to propaganda if your only knowledge of a subject is what is in front of your face? None of that is to say that we should dismiss search and depend upon memorization. We need both.
I can't tell you what to say to your children about LLMs. For one thing, I don't know what is important to them. Yet it is important to remember that it isn't an either-or thing. LLMs are probably going to be essential to manage the profoundly unmanagable amount of information our world creates. Yet it is also important to remember that they are like the person who memorizes but lacks the ability to reason. They may be able to impress people with their fountain of facts, yet they will be unable to create a mark on the world since they will lack the ability to create anything unique.
> At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for?
That's actually pretty doable. Almost every resource provides more context than just the exact thing you're asking. You build on that knowledge and continue asking. Nobody knows everything - we've been doing the equivalent of this kind of research forever.
> How can you assess the validity of a source if you don't know the fundamentals?
Learn about the fundamentals until you get to the level you're already familiar with. You're describing an adult outside of school environment learning basically anything.
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
And those people are wrong, in a similar way to how it's wrong to say: "There's no point in having very much RAM, as you can just page to disk."
It's the cognitive equivalent of becoming morbidly obese (another popular decision in today's world).
I think the biggest issue with LLMs is basically just the fact that we're finally coming to the end of the long tail of human intellectual capability.
With previous technological advancements, humans had places to intellectually "flee", and in fact, previous advancements were often made for the express purpose of freeing up time for higher level pursuits. The invention of computers, for example, let mathematicians focus on much higher level skills (although even there an argument can be made that something has been lost with the general decrease in arithmetic abilities amoung modern mathematicians).
Large language models don't move humans further up the value chain, though. They kick us off of it.
I hear lots of people prosletizing wonderful futures where humans get to focus on "the problems that really matters", like social structures, or business objectives; but there's no fundamental reason that large language models can't replace those functionalities aswell. Unlike, say, a Casio, which would never be able to replace a social worker no matter how hard you tried.
> coming to the end of the long tail of human intellectual capability
Really? We invent LLMs, continue to improve them, and that's the end of our intellectual capability?
> a Casio, which would never be able to replace a social worker no matter how hard you tried
And LLMs can't replace a social worker no matter how hard you try today.
Why should you learn how to add when you can just use a calculator? We've had calculators for decades!
Because understanding how addition works is instrumental to understanding more advanced math concepts. And being able to perform simple addition quickly, without a calculator is a huge productivity boost for many tasks.
In the world of education and intellectual development it's not about getting the right answer as quickly as possible. It's about mastering simple things so that you can understand complicated things. And often times mastering a simple thing requires you to manually do things which technology could automate.
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
It's been my experience that LLMs are only better than me at stuff I'm bad at. It's noticeably worse than me at things I'm good at. So the answer to your question depends: can your child get good at things while leaning on an LLM?
I don't know the answer to this. Maybe schools need to expect more from their students with LLMs in the picture.
Given the rate of improvement wrt to llms, this may not hold true for long
The rate of improvement with LLMs seems to have halted since Claude3.5, which was about a year ago. I think we’ve probably gone as far as we can go with tweaks to transformer architectures, and we’ll need a new academic discovery which could take years to do better
> Why should I learn to do X if I can just ask an LLM and it will do it better than me
The same way you answer - "Why should I memorise this if I can always just look it up"
Because your perceptual experience is built upon your knowledge and experiences. The entire way you see the universe is altered based on these things, including what you see through your eyes, what you decide is important and what you decide to do.
The goal of life is not always "simply to do as little as possible", or "offload as much work as possible" but lots of the time includes struggling through the fundimentals so that you become a greater version of yourself, it is not the complete task that we desire, it is who you became while you did the work that we desire.
I thought the same as you. But I think not developing those skills will come back and bite you at some point.
For instance your point about: > reading a map to get around (GPS)
https://www.statnews.com/2024/12/16/alzheimers-disease-resea...
After reading the above it dawned on me that the human brain needs to develop spatial awareness and not using that capability of the brain very slowly shuts it off. So I purposefully turn off the gps when I can.
I think not fully developing each of those abilities might have some negative effects that will be hard to diagnose.
"More or less" is doing a lot of work there. School, at least where I am, still spends the first year getting children to memorize the order of the numbers from 1-20 and if there's an even or odd number of a thing on a picture.
Do you google if 5 is less than 6 or do you just memorize that?
If you believe that creativity is not based on a foundation of memorization and experience (which is just memorization) you need to reflect on the connection between those.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
However, I am going to hazard a guess that you still care about your child's ability to do arithmetic, even though calculators make that trivial.
And if I'm right, I think it's for a good reason—learning to perform more basic math operations helps build the foundation for more advanced math, the type which computers can't do trivially.
I think this applies to AI. The AI can do the basic writing for you, but you will eventually hit a wall, and if all you've ever learned is how to type a prompt into ChatGPT, you won't ever get past that wall.
----
Put another way:
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
"Because eventually, you will be able to do X better than any LLM, but it will take practice, and you have to practice now."
>>>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interest in penmanship, spelling, or cartography. <<<
For me it is the second order benefits, notably the idea of "attention to detail" and "a feel for the principles". The principles of each activity being different: writing -> fine motor control, spelling -> word choice/connotation, map -> sense of direction, (my own insert here) money handling -> cost of things
All of them involve "attention to detail" because that's what any activity is - paying attention to it.
But having built up the experience in paying attention to [xyz], you can now be capable when things go wrong.
IE catch disputable transaction on the credit card, or note being told by the shop clerk "No Returns" when their policy says otherwise, un-losting yourself when the phone runs out of battery in the city.
Notably, you don't have to be trained for the details in traditional ways like writing the same sentence 100 times on a piece of paper. Learning can be fun and interesting.
Children can write letters to their friends well before they get their own phone. Geocaching/treasure hunts(hand drawn mud maps!)/orienteering for map use.
As for LLM ... well currently "attention to detail" is vital to spot the (handwave number) 10% of when it goes wrong. In the future LLMs may be better.
But if you want to be better than your peers at any given thing - you will need an edge somewhere outside of using an LLM. Yet still, spelling/word choice/connotations are especially linked to using an LLM currently.
Knowing how to "pay attention to detail" when it counts - counts.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
I don't know. I really feel like the auto-correct features are out to get me. So many times I want to say "in" yet it gets corrected to "on", or vice-versa. I also feel like it does the same to me with they're/their/there. Over the past several iOS/macOS updates, I feel like I've either gotten dumber and no longer do english gooder, or I'm getting tagged by predictive text nonsense.
It will also open you up to getting advertisements shoved up your intestines if you can't spell, or form basic sentences without machine assistance.
Imagine always having Tex autocorrect to Texaco
Universities still teach you calculus and real analysis even though Wolfram Alpha exists. It boils down to your willing to learn something. An LLM can't understand things for you. I'm "early genz" and I write code without llm because I find data structure and algorithm very interesting and I want to learn the concepts not because I'm in love with the syntax of C or Rust (I love the syntax of C btw).
Why have children learn to walk? They're better off learning the newest technology of hoverboards and not getting left behind!
>children will have a different perspective
Children will lack the critical thinking for solving complex problems, and even worse, won't have the work ethic for dealing with the kinds of protracted problems that occur in the real world.
But maybe that's by design. I think the ownership class has decided productivity is more important than societal malaise.
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
It absolutely isn't.
My memory got worse, and google search got bad and now I cant find anything
Spell check isn't really adequate. You get a page full of correctly spelled words, but they're the wrong words.
Try being British, often they're not correctly spelt words at all.
Even if you use a tool to do work, you still have to understand how your work will be checked to see whether it meets expectations.
If the expectation is X, and your tool gives you Y, then you’ve failed - no matter if you could have done X by hand from scratch or not, it doesn’t really matter, because what counts is whether the person checking your work can verify that you’ve produced X. You agreed to deliver X, and you gave them Y instead.
So why should you learn to do X when the LLM can do it for you?
Because unless you know how to do X yourself, how will you be able to verify whether the LLM has truly done X?
Your kid needs to learn to understand what the person grading them is expecting, and deliver something that meets those expectations.
That sounds like so much bullshit when you’re a kid, but I wish I had understood it when I was younger.
Let your children watch the movie Idiocracy - it’s more eloquent than you’ll ever be in answering that question.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
What I don't like are all the hidden variables in these systems. Even GPS, for example, is making some assumptions about what kind of roads you want to take and how to weigh different paths. LLMs are worse in this regard because the creators encode a set of moral and stylistic assumptions/dictates into the model and everybody who uses it is nudged into that paradigm. This is destructive to any kind of original thought, especially in an environment where there are only a handful of large companies providing the models everyone uses.
Your child perhaps shouldn't learn things that computers can do. But they should learn something to make themselves more useful than every uneducated person. I'm not sure schools are doing much good anymore teaching redundant skills. Without any abilities beyond the default, they'll grow up to be poor. I don't know what that useful education is but I expect something sort of thinking skills, and perhaps even giant piles of knowledge to apply that thinking to.
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
1. You won’t always have an LLM. It’s the same reason I still have at least my wife’s phone number memorized.
2. So you can learn to do it better. See point 1.
I wasn’t allowed to use calculators in first and second grade when memorizing multiplication tables, even though a calculator could have finished the exercise faster than me. But I use that knowledge to this day, every day, and often I don’t have a calculator (my phone) handy.
It’s what I tell my kids.
> there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
It's not true even though it's accepted. Rote memorization has a place in an education. It does strengthen learning and allow one to make connections between the things seen presently and things remembered, among other things.
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
That sounds like setting-up your child for failure, to put it bluntly.
How do you want to express a thought clearly if you already fail at the stage of thinking about words clearly?
You start with a fuzzy understanding of words, which you delegated to a spellchecker, added to a fuzzy understanding of writing, which you've delegated to a computer, combined with a fuzzy memory, which you've delegated to a search engine, and you expect that not to impact your child's ability to create articulate thoughts and navigate them clearly?
To add irony to the situation, the physical navigation skills have, themselves, been delegated to a GPS..
Brains are like muscles, they atrophy when not used.
Reverse that course before it's too late, or suffer (and have someone else suffer) the consequences.
I agree with your point, but I just want to say that understanding a word and knowing how to spell it are orthogonal.
This is a good point, and part of the unwritten rationale of the argument I was trying to make.
At first glance, knowing how to spell a word and understanding a word should be perfectly orthogonal. How could it not be? Saying that it is not so would imply that civilizations without writing would have no thought or could not communicate through words, which is preposterous.
And yet, once we start delegating our thinking, our spelling and our writing to external black boxes, our grasp on those words and our grasp of those words becomes weaker. To the point that knowing how to spell a word might become a much bigger part, relatively, of our encounter with those words, as we are doing less conceptual thinking about those words and their meaning.
And therefore, I argue that, in a not too far-fetched extremum, understanding a word and knowing how to spell a word might not be fully orthogonal.
Well, I wouldn't say they're completely orthogonal, knowing how a word is spelled can sometimes give insight into the meaning of the word. I think they're mostly orthogonal though; it's fairly common for people to know what a word means without knowing how to spell it, and on the flip side there are people, like Scrabble players, who know how to spell a lot of words which they don't really know the meaning of. I've heard of one guy who is a champion French Scrabble player who can't actually understand French.
you will benefit from the beauty of appreciation, lad, just hang on a little bit longer. It is beautifully explained in this essay https://www.astralcodexten.com/p/the-colors-of-her-coat
> That's more or less accepted today.
Bullshit! You cannot do second order reasoning with a set of facts or concepts that you have to look up first.
Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.
People will think you are a wizard if you read documentation and bother to remember it, because they're still busy asking Google or ChatGPT while you're happily coding without pausing
I am 100% certain people said the same thing about arithmetic and calculators and now mental arithmetic skill is nothing more than a curiosity.
That's simply not true. I use mental arithmetic skill every day. It's irritating or funny when you come across someone who struggles with it, depending on the situation.
Obviously basic levels are needed but the ability to say multiply 4 digit numbers in your head is totally superfluous. Theres a parallel to software engineering there.
Being able to do basic math in your head is valuable just in terms of basic practicality (quickly calculating a tip or splitting a bill, doubling a recipe, reasoning about a budget...), but this is a poor analogy anyway because 3x2 is still 3x2 regardless of how you get there whereas creative work produced by software is worthless.
I encourage you to reconsider.
Mental math is essential for having strong numerical fluency, for estimation, and for reasoning about many systems. Those skills are incredibly useful for thinking critically about the world.
To a certain point. Basic arithmetic is important but the ability to calculate large square roots or multiply multi-digit numbers is not very relevant when you can trivially calculate them on your phone in seconds.
> Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.
Not to mention discernment and info literacy when you do need to go to the web to search for things. AI content slop has put everybody who built these skills on the back foot again, of course.
>Why should I learn to do X if I can just ask an LLM and it will do it better than me
This may eventually apply to all human labor.
I was thinking, even if they pass laws to mandate companies employ a certain fraction of human workers... it'll be like it already is now: they just let AI do most of the work anyway!
It’s all about critical thinking. The answer to your kid is that LLMs are a tool and until they run the entire economy there will still need to be people with critical thinking skills making decisions. Not every task at school helps hone critical thinking but many of them do.
>So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"
Realistically it comes down to the idea that being an educated individual that knows how to think is important for being successful, and learning in school is the only way we know to optimize for that, even if it's likely not the most efficient way to do so.
The scope of what’s useful to know changes with tools, but having a bullshit detector requires actually knowing some things and being able to reason about the basics.
It’s not that LLM’s are particularly different it’s that people are less able to determine when they are messing up. A search engine fails and you notice, an LLM fails and your boss, customer, ect notices.
I don't think memorizing poetry fits your picture. Nobody ever memorized poetry so that they could answer questions about it.
A large part was to preserve cultural knowledge, which is kind of like answering questions about it. What wisdom or knowledge does this entail. People do the same with religious texts today
The other part I imagine was largely entertainment, social and memory is a good skill to build.
It doesn’t seem that different from having to write a book report or something like that. Back in school, we also needed to memorize poems and songs to recite them - I quite hated it because my memory was never exactly great. Same as having to remember the vocabulary in a foreign language when learning it, though that might arguably be a bit more directly useful.
For the same reason you should learn how to walk in a world that has utility scooters.
>When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.
Au contraire! It is quite wrong and was wrong then too. "Rote memorisation" is a slur for knowledge. Knowledge is still important.
Knowledge is the basis for skill. You can't have skill or understanding without knowledge because knowledge is illustrative (it gives examples) and provides context. You can know abstract facts like "addition is abelian" but that is meaningless if you can't add. You can't actually program if you don't know the building blocks of code. You can't write a C program if you have to look up the function signature of read(2) and write(3) every time you need to use them.
You don't always have access to Google, and its results have declined procipitously in quality in recent years. Someone relying on Google as their knowledge base will be kicking themselves today, I would claim.
It is a bit like saying you don't need to learn how to do arithmetic because of calculators. It misses that learning how to do arithmetic isn't just important for the sake of being able to do it, but for the sake of building a comfort with numbers, building numerical intuition, building a feeling for maths. And it will always be faster to simply know that 6x7 is 42 than to have to look it up. You use those basic arithmetical tasks 100 times every time you rearrange an equation. You have to be able to do them immediately. It is analogous.
Note that I have used illustrative examples. These are useful. Knowledge is more than knowing abstract facts like "knowledge is more than knowing abstract facts". It is about knowing concrete things too, which highlight the boundaries of those abstract facts and illustrate their cores. There is a reason law students learn specific cases and their facts and not just collections of contextless abstract principles of law.
>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers),
Writing legibly is important for many reasons. Note taking is important and often isn't and can't be done with a computer. It is also part of developing fine motor skills generally.
>spell very well (spell check keeps us professional),
Spell checking can't help with confusables like to/two/too, affect/effect, etc. and getting those wrong is much more embarrassing than writing "embarasing" or "parralel". Learning spelling is also crucial because spelling is an insight into etymology which is the basis of language.
>reading a map to get around (GPS), etc
Reliance on GPS means never building a proper spatial understanding. Many people that rely on GPS (or being driven around by others) never actually learn where anything is. They get lost as soon as they don't have a phone.
>but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection).
Memorising poetry is a different sort of thing--it is a value judgment not a matter of practicality--but it is valuable in itself. We have robbed generations of children of their heritage by not requiring them to learn their culture.
This is how we end up with people who cant write legibly, cant smell bad maths (on the news/articles/ads), cant change tires, have no orienteering or sense of direction and memories like swiss cheese. Trust the oracle son. /s
I think all of the above do one thing brilliantly, built self confidence.
Its easy to get bullshitted if what youre able to hold in your head is effectively nothing.
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this, it's called the "Flipped classroom" approach.
Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.
I've also talked with a bunch of teachers and a couple admins about this. They agree it's a huge problem. By the same token, they are using AI to create their lesson plans and assignments! Not fully of course, they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.
The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.
The education model at high school and undergrad uni has not changed in decades, I hope AI leads to a fundamental change. Homework being made easy by AI is a symptom of the real issues. Being taught by uni students who learned the curriculum last year, lecturers who only lecture due to obligation and haven't changed a slide in years. Lecturers who refuse to upload lecture recordings or slides. Just a few glaring issues, the sad part these are rather superficial easy to fix cases of poor teaching.
I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response to be made by teaching establishments. If anything AI will lead to bigger differences in student learning. Those who learn core concepts and to critically think will be become more valuable and the people who just AI everything will become near worthless.
Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.
And yet all the people who created all the advances in AI have extremely traditional, extremely good, fancy educations, and did absolutely bonkers amount of homework. The thing you are talking about is very aspirational.
There's some sad irony to that, making homework easier for future generations but those generations being worse off as a result on average. The lack of AI assistance was a forcing function to greater depth.
Outliers will still work hard and become even more valuable, AI won't affect them negatively. I feel non outliers will be affected negatively on average in ability to learn/think.
With no confirming data, I feel those who got that fancy education would do so in any other institution. Just those fancy institutions draw in and filter for intelligent types, not teach them to be intelligent as it's practically a pre-requisite.
I don't see a future that doesn't involve some form of AR glasses and individual tuned learning. Forget teachers, you will just don your learning glasses and have an AI that walks you through assignments and learning everyday.
That is if learning-to-become-a-contributing-member-of-society doesn't become obsolete anyway.
> it's called the "Flipped classroom" approach.
Flipped classroom is just having the students give lectures, instead of the teacher.
> Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable.
This is called "proctored exams" and it's been pretty common in universities for a few centuries.
None of this addresses the real issue, which is whether teachers should be preventing students from using AIs.
> Flipped classroom is just having the students give lectures, instead of the teacher.
Not quite. Flipped classroom means more instruction outside of class time and less homework.
> This is called "proctored exams" and it's been pretty common in universities for a few centuries. None of this addresses the real issue
Proctored exams is part of it. In-class assignments is another. Asynchronous instruction is another.
And yes, it addresses the issue. Students can use AI however they see fit, to learn or to accomplish tasks or whatever, but for actual assessment of ability they cannot use AI. And it leaves the door open for "open-book" exams where the use of AI is allowed, just like a calculator and textbook/cheat-sheet is allowed for some exams.
https://en.wikipedia.org/wiki/Flipped_classroom
Flipped classroom sounds horrible to me. I never liked being given time to work on essays or big projects in class. I prefer working at home, where the environment is much more comfortable and I can use equipment the school doesn't have, where I can wait until I'm in the right mood to focus, where nobody is pestering me about the intermediary stages of my work, etc.
It also seems like a waste of having an expert around to be doing something you could do at home without them.
Exams should increasingly be written with the idea in mind that students can and will use AI. Open book exams are great. They're just harder to write.
I should add that upon reflection, I did have some really good "flipped classroom" experiences in college, especially in highly technical math and philosophy courses. But in those cases (a) homework was really vital, (b) significant work was never done in class, and (c) we never watched lectures at home. Instead, the activity at home (which did replace lectures) was reading textbooks (or papers) and doing homework. Then class time was like collective office hours.
Failure to do the homework made class time useless, the material was difficult, and the instructors were willing to give out failing grades. So doing the homework was vital even when it wasn't graded. Perhaps that can also work well here in the context of AI, at least for some subjects.
Flipped classroom means you watch the recorded lecture outside of class time and you do your homework during class time.
Thank you, it's amazing how people don't even try to understand what words mean before dismissing it. Flipped makes way more sense anyway since lectures aren't terribly interactive. Being able to pause/replay/skip around in lectures is underrated.
Except that students don't watch the videos. We have so much log data on this - most of them don't bother to actually watch the videos. They intend to, they think they will, but they don't.
As a university student currently taking a graduate course with a "flipped classroom" curriculum, I can confirm that many students in the class aren't watching the posted videos.
I myself am one of them, but I attribute that to the fact that this is a graduate version of an undergrad class I took two years ago (but have to take the grad version for degree requirements). Instead, I've been skimming the posted exercises and assessing myself which specific topics I need to brush up on.
If they can perform well without reviewing the material, that's a problem with either the performance measure or the material.
And not watching lectures is not the same as not reviewing the material. I generally prefer textbooks and working through proofs or practice problems by hand. If I listen to someone describe something technical I zone out too quickly. The only exception seems to be if I'm able to work ahead enough that the lecture feels like review. Then I'm able to engage.
>Not fully of course, they edit the output using their expertise
Surely this is sarcasm, but really your average schoolteacher is now a C student Education Major.
I was talking about people I know and talk with, mostly friends and family, who are smart, hard working, and their students are lucky to have them.
I’m a physicist. I can align and maximize ANY laser. I don’t even think when doing this task. Long hours of struggle, 50 years ago. Without struggle there is nothing. You can bullshit your way in. But you will be ejected.
barely related to your point but “I can align and maximize ANY laser” is such an incredibly specific flex, I love it
Especially because it's not a skill everyone gets just because they practice. I know because I tried for years lol.
A master blacksmith can shoe a horse an' all. Laser alignment is also a solved problem with a machine. Just because something can be done by hand does not mean it has any intrinsic value.
> But that struggling was ultimately necessary to really learn the concepts.
This is what isn't explained or understood properly (...I think) to students; on the surface you go to college/uni to learn a subject, but in reality, you "learn to learn". The output that you're asked to submit is just to prove that you can and have learned.
But you don't learn to learn by using AI tools. You may learn how to craft stuff that passes muster, gets you a decent grade and eventually a piece of paper, but you haven't learned to learn.
Of course, that isn't anything new, loads of people try and game the system, or just "do the work, get the paper". A box ticking exercise instead of something they actually want to learn.
The challenge is that while LLMs do not know everything, they are likely to know everything that's needed for your undergraduate education.
So if you use them at that level you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then later in life when you actually hit problem domains that the LLM wasn't trained in, you'll not have learned the thinking patterns needed to persist and solve those problems.
Is that necessarily a bad thing? It's mixed: - You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level. - For more senior roles that are intrinsically solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people who are ready to take on that role.
It's like taking the college weed out classes and shifting those to people in the middle of their career.
Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.
Business will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past
Agreed, the struggle often leads us to poke and prod an issue from many angles until things finally click. It lets us think critically. In that journey you might've learned other related concepts which further solidifies your understanding.
But when the answer flows out of thin air right in front of you with AI, you get the "oh duh" or "that makes sense" moments and not the "a-ha" moment that ultimately sticks with you.
Now does everything need an "a-ha" moment? No.
However, I think core concepts and fundamentals need those "a-ha" moments to build a solid and in-depth foundation of understanding to build upon.
Yep. People love to cut down this argument by saying that a few decades ago, people said the same thing about calculators. But that was a problem too! People losing a large portion of their mental math faculty is definitely a problem. If mental math was required daily, we wouldn't see such obvious BS numbers in every kind of reporting(media/corporate/tech benchmarks) that people don't bat an eye at. How much the problem is _worth_ though, is what matters for adoption of these kinds of tech. Clearly, the problem above wasn't worth much. We now have to wait and see how much the "did not learn through cuts and scratches" problem is worth.
Absolutely this. AI can help reveal solutions that weren't seen. An a-ha moment can be as instrumental to learning as the struggle that came before.
Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.
> I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
In the end the willingness to struggle will set apart the truly great Software Engineer from the AI-crutched. Now of course this will most of the time not be rewarded, when a company looks at two people and sees “passable” code from both but one is way more “productive” with it (the AI-crutched engineer) they’ll inititally appreciate this one more.
But in the long run they won’t be able to explain the choices made when creating the software, we will see the retraction from this type of coding when the first few companies’ security falls apart like a house of cards due to AI reliance.
It’s basically the “instant gratification vs delayed gratification” argument but wrapped in the software dev box.
I don't wholly disagree with this post, but I'd like to add a caveat, observing my own workflow with these tools.
I guess I'd qualify to you as someone "AI crutched" but I mostly use it for research and bouncing ideas (or code complete, which I've mentioned before - this is a great use of the tool and I wouldn't consider it a crutch, personally).
For instance, "parse this massive log output, and highlight anything interesting you see or any areas that may be a problem, and give me your theories."
Lots of times its wrong. Sometimes its right. Sometimes, its response gives me an idea that leads to another direction. It's essentially how I was using google + stack overflow ten years ago - see your list of answers, use your intuition, knowledge, and expertise to find the one most applicable to you, continue.
This "crutch" is essentially the same one I've always used, just in different form. I find it pretty good at doing code review for myself before I submit something more formal, to catch any embarrassing or glaringly obvious bugs or incorrect test cases. I would be wary of the dev that refused to use tools out of some principled stand like this, just as I'd be wary of a dev that overly relied on them. There is a balance.
Now, if all you know are these tools and the workflow you described, yea, that's probably detrimental to growth.
I've been calling this out since the rise of ChatGPT:
"The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the LLM to provide an answer, rather than taking a few moments to quietly ponder the problem on your own. By reaching for it to solve any problem at nearly an instinctual level you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning."
I've had multiple situations where AI has helped me get to the solution because it has been unable to get there itself. But that I wouldn't have realised that solution otherwise. In one case, looking for a plot, it delivered many woeful options but one sparked an alternative thought that got me on track. In other cases trying to debug code, having it talk through the logic/flow and exhaust other fixes, I have managed to solve the problem despite not being experienced at all with that language.
The dangers I've found personally are more around how it eases busywork, so I'm more inclined to be distracted doing that as though it delivers actual progress.
Somewhat agree.
I agree in principal - the process of problem solving is the important part.
However I think LLMs make you do more of this because of what you can offload to the LLM. You can offload the simpler things. But for the complex questions that cut across multiple domains and have a lot of ambiguity? You're still going to have to sit down and think about it. Maybe once you've broken it into sufficiently smaller problems you can use the LLM.
If we're worried about abstract problem solving skills that doesnt really go away with better tools. It goes away when we arent the ones using the tools.
You can offload the simpler things, but struggling with the simpler things is how you build the skills to handle the more complex ones that you can't hand off.
If the simpler thing in question is a task you've already mastered, then you're not losing much by asking an LLM to help you with it. If it's not trivial to you though, then you're missing an opportunity to learn.
Couldn't have said it better myself.
The biology of the human brain will not change as a result of these LLMs. We are imperfect and will tend to take the easiest route in most cases. Having an "all powerful" tool that can offload the important work of figuring out tough problems seems like it will lead to a society less capable in solving complex problems.
If you haven't mastered it yet then its not a simple thing.
Grandma will not be able to implement a simple add function using python by asking chat gpt and copy pasting.
The counter argument is that now you can skip boilerplate code and focus on the overall design and the few points that brainpower is really needed.
The amount of visualizations that i have made after chat gpt was released has increased exponentially. I loath looking the documentation again and again to make a slightly non standard graph. Now all of the friction is gone! Graphs and visuals are everywhere in my code!
> focus on [...] the few points that brainpower is really needed
The person you're responding to is talking about it from an educational perspective though. If your fundamentals aren't solid, you won't know that exponentially smoothed reservoir sampling backed by a splay tree is optimal for your problem, and ChatGPT has no clue either. Trying things, struggling, and failing is crucial to efficient learning.
Not to mention, you need enough brain power or expertise to know when it's bullshitting you. Just today it was telling me that a packed array was better than my proposed solution, confidently explaining why, and not once saying anything correct. No prompt changes could fix it (whether restarting or replying), and anyone who tried to use less brainpower there would be up a creek when their solution sucked.
Mind you, I use LLMs a lot, including for code-adjacent tasks and occasionally for code itself. It's a neat tool. It has its place though, and it must be used correctly.
I think it’s finally time to just stop the homework.
All school work must be done within the walls of the school.
What are we teaching our children? It’s ok to do more work at home?
There are countries that have no homework and they do just fine.
Homework helps reinforce the material learned in class. It's already a problem where there is too much material to be fit into a single class period. Trying to cram in enough time for homework will only make that problem worse.
Can do the work the next day to reinforce.
As I said there are countries without homework and they seem to do ok. So it’s not mandatory by any means.
>Can do the work the next day to reinforce.
Keeping the curriculum fixed, there's already barely enough time to cover everything. Cutting the amount of lectures in half to make room for in-class homework time does not fix this fundamental problem.
Just make lecture times longer.
students already don't pay attention in lecture:
* due to either learning/concentration issues * the fact that most lecturers are boring, dull, and unengaging * and oftentimes you can learn better from other sources
making lecture longer doesn't fix a single one of these issues. it just makes students learn even less.
There are such legal, cultural and economic differences between countries that no homework might work in one country but not at all in another.
Yeah, the concept of "productive struggle" is important to the education process and having a way to short circuit it seems like it leads to worse learning outcomes.
I am not sure all humans work the same way though. Some get very very nervous when they begin to struggle. So nervous that they just stop functioning.
I felt that during my time in university. I absolutely loved reading and working through dense math text books but the moment there was a time constraint the struggle turned into chaos.
> Some get very very nervous when they begin to struggle. So nervous that they just stop functioning.
I sympathize, but it's impossible to remove all struggle from life. It's better in the long run to work through this than try to avoid it.
I think teachers also need to reconsider how they are measuring mastery in the subject. LLMs exist. There is no putting the cat back into the bag. If your 1980s way to measure a student's mastery of a subject can be fooled by an LLM, then how effective is that measurement in 2020+? Maybe we need to stop using essays as a way to tell if the student has learned the material.
Don't ask me what the solution is. Maybe your product does it. If I knew, I'd be making a fortune selling it to universities.
I don't think asking "what's wrong with my code" hurts the learning process. In fact, I would argue it helps it. I don't think you learn when you have reached your frustration point and you just want the dang assignment completed. But before reaching that point, if you had a tutor or assistant that you could ask, "hey, I'm just not seeing my mistake, do you have ideas" goes a long way to foster learning. ChatGPT, used in this way, can be extremely valuable and can definitely unlock learning in new ways which we probably even haven't seen yet.
That being said, I agree with you, if you just ask ChatGPT to write a b-tree implementation from scratch, then you have not learned anything. So like all things in academia, AI can be used to foster education or cheat around it. There's been examples of these "cheats" far before ChatGPT or Google existed.
No I think the struggle is essential. If you can just ask a tutor (real or electronic) what is wrong with your code, you stop thinking and become dependent on that. Learning to think your way through a roadblock that seems like a showstopper is huge.
It's sort of the mental analog of weight training. The only way to get better at weightlifting is to actually lift weight.
If I were to go and try to bench 300lbs, I would absolutely need a spotter to rescue me. Taking on more weight than I can possibly achieve is a setup for failure.
Sure, I should probably practice benching 150lbs. That would be a good challenge for me and I would benefit from that experience. But 300lbs would crush me.
Sadly, ChatGPT is like a spotter that takes over at the smallest hint of struggle. Yes, you are not going to get crushed, but you won't get any workout done either.
You really want start with a smaller weight, and increment it in steps as you progress. You know, like a class or something. And when you do those exercises, you really want to be lifting those weights yourself, and not rely on spotter for every rep.
We're stretching the metaphor here. I know, kind of obnoxious.
If I have accidentally lifted too much weight, I want a spotter that can immediately give me relief. But yes, you're right. If I am always getting a spot, then I'm not really lifting my own weight and indeed not making any gains.
I think the question was, "I'm stuck on this code, and I don't see an obvious answer." Now the lazy student is going to ask for help prematurely. But that doesn't preclude ChatGPT's use to only the lazy.
If I'm stuck and I'm asking for insight, I think it's brilliant that ChatGPT can act as a spotter and give some immediate relief. No different than asking for a tutor. Yes maybe ChatGPT gives away the whole answer when all you needed is a hint. That's the difference between pure human intelligence and just the glorified search engine that is AI.
And quite probably, this could be a really awesome way in which AI learning models could evolve in the context of education. Maybe ChatGPT doesn't give you the whole answer, instead it can just give you the hint you need to consider moving forward.
Microsoft put out a demo/video of a grad student using Copilot in very much this way. Basically the student was asking questions and Copilot was giving answers that were in the frame of "did you think about this approach?" or "consider that there are other possibilities", etc. Granted, mostly a marketing vibe from MSFT, but this really demonstrates a vision for using LLMs as a means for true learning, not just spoiling the answer.
Sure, this is possible. Also Chegg is an "innovative learning tool", not a way to cheat.
I agree that it's not that different than asking a tutor though, assuming it's a personal tutor whom are you paying so they won't ever refuse to answer. I've never had access to someone like that, but I can totally believe that if I did, I would graduate without learning much.
Back to ChatGPT: during my college times I've had plenty of times when I was really struggling, I remember feeling extremely frustrated when my projects would not work, and spending long hours in the labs. I was able to solve this myself, without any outside help, be it tutors or AI - and I think this was the most important part of my education, probably at least as important as all the lectures I went to. As they say, "no pain, no gain".
That said, our discussion is kinda useless - it's not like we can convince college students to stop using AI. The bad colleges will pass everyone (this already happens), the good colleges will adapt (probably by assigning less weight to homework and more weight to in-class exams). Students will have another reason to fail the class: in additional to classic "I spend whole semester partying/playing computer games instead of studying", they would also say "I never opened books and had ChatGPT do all assignments for me, why am I failing tests?"
Students do something akin to vibe coding I guess. It may seem impressive at first glance but if anything breaks you are so, so lost. Maybe that’s it, break the student’s code, see how they fix it. The vibe coding student is easily separate from the real one (of course this real coder can also use AI, just not yoloing it).
I guess you can apply similar mechanics to reports. Some deeper questions and you will know if the report was self written or if an AI did it.
>For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".
Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.
It doesn't matter who coherent the output is - the students will paste it anyway, then fail the assignment (and you need to deal with grading it) and then complain to parents and school board that you're incompetent because you're failing the majority of the class.
Your post is based in a misguided idea that students actually care about some basic quality of their work.
>> Does that actually work?
Sure. Works in my IDE. "Create a linked list implementation, use that implementation in a method to reverse a linked list and write example code to demonstrate usage".
Working code in a few seconds.
I'm very glad I didn't have access to anything like that when I was doing my CS degree.
Yeah, and forget about giving skeleton code to students they should fill in; using an AI can quite frequently completely ace a typical undergraduate level assignment. I actually feel bad for people teaching programming courses, as the only real assessment one can now do is in-class testing without computers, but that is a strange way to test students’ ability to write and develop code to solve certain classes of problems…
Why do the in-class testing without computers?
We use an airgapped lab (it has LAN and a local git server for submissions, no WAN) to give coding assessments. It works.
At my college, we did in-class testing with psuedocode, because we were being tested on concepts, not specific programming languages or syntax.
Hopefully someone is thinking about adapting the assessments. Asking questions that focus on a big picture understanding instead of details on those in-class tests.
I have some subjects, at Masters - that are solvable by one prompt. One.
Quality of CS/Software Engineering programs vary that much.
Yeah. On the other hand, "implement boruvkas MST algorithm in cuda such that only the while(numcomponents > 1) loop runs on the CPU, and everything else runs in the gpu. Memcpy everything onto the gpu first and only transfer back the count each iteration/keep it in pinned memory"
It never gets it right, even after many reattempts in cursor. And even if it gets it right, it doesn't do the parallelization effectively enough - it's a hard problem to parallelize.
Why are you asking? Go try it. And yes, depending on the task, it does.
As I said, I'm not a student, so I don't have access to a homework assignment to paste in. Ironically I have pretty much everything I ever submitted for my undergrad, but it seems like I absolutely never archived the assignments for some reason.
I was able to get ~80% one shots on Advent of Code with 4o up to about day 12 iirc.
since late 2024/early 2025 it now is the case, especially with a reasoning model like Sonnet 3.7, DeepSeek-r1, o3, Gemini 2.5, etc., and especially if you upload the textbook, slides, etc alongside the homework to be cheated on.
most normal-difficulty undergraduate assignments are now doable reliably by AI with little to no human oversight. this includes both programming and mathematical problem sets.
for harder problem sets that require some insight, or very unstructured larger-scale programming projects, it wouldn't work so reliably.
but easier homework assignments serve a valid purpose to check understanding, and now they are no longer viable.
I spent much of the past year at public libraries, and I heard the word ChatGPT approximately once per minute, in surround sound. Always from young people, and usually in a hushed tone...
In one way I'm glad I learned to code before LLM:s. It would be so hard to push through the learning now when you are just a click away from buildning the app with AI...
>I built a popular product that helps teachers with this problem.
Does your product help teachers detect cheating? Because I hear none of them are accurate, with many false positives and ruined academic careers.
Are you saying yours is better?
I’m pretty sure you can assume close to 100% of students are using LLMs to do their homework.
And if you're that one person out of 100,000 who is not using LLMs to do their homework, you are at a significant disadvantage on the grading curve.
My university solves this quite easily.
There is no graded homework, the coursework is there only as a guide and practice for the exams.
So you can absolutely use LLMs to help you with the exercises or to help understand something, however if you blindly get answers you will only be fooling yourself as you won't be able to pass the exams.
That’s how most schooling has already been in a lot of South and East Asia. If you don’t do your homework, you get punished in other ways, but it doesn’t have any impact on the overall grade, the grade solely depends on the final exam.
Maybe, but piss on that, who needs good grades? Youll learn a hell of a lot better
Currently in university and my experience is that it heavily depends on the module. For a lot your statement's probably accurate, however for others it really isn't. For example we have a microprocessors module which is programming for an rp2040 in c, but also manually setting up interrupt handlers etc in assembly. All of the LLM's are completely useless for it, they tell you that the rp2040 works in ways it just doesn't and are actively unhelpful with the misinformation. The only students who can do well in that module are the ones that understand the material well and go to the datasheet and documentation instead of an LLM.
I'm more interested in memory and knowledge retention in general and how AI can assist. How many times have you heard from people that they are doing rote memorization and will "data dump" test information once a course is over. These tools are less to blame than the motivators and systems that are suppose to be engaging students in real learning and the benefits of a struggle.
Another problem is there is so much in technology, I just can't remember everything after years of exposure to so many spaces. Not being able to recall information you used to know is frustrating and having AI to remind you of details is very useful. I see it as an amplifying tool, not a replacement for knowledge. I'm sure there are some prolific note taking memory tricksters out there but I'm not one of them.
I frequently forget information over time and it's nice to have a tool to remind me of how UDP, RTP, and SIP routing work when I haven't been in the comm or network space for a while.
My CS undergrad school used to let students look up documentation during coding exams. Most courses had a 3-5 hour coding challenge where you had to make substantial changes to a course project you had developed. I think this could also be the right response to LLMs. Let students use whatever they want to use, and test true skills and understanding.
FWIW, exams testing rote learning without the ability to look up things would have been much easier. It was really stressful to sit down and make major changes to your project to satisfy new unit tests, which often targeted edge cases and big O complexity to crash your code.
That is a great idea!
Most students would find getting their hands dirty in this way more valuable than reading about something from start to end.
Yes, it led to well-rounded learning. But we had too many courses and, overall, I think it was too much. All CS courses had a theoretical exam, some project-based learning, and some coding exam to prevent cheating in the project-based learning part.
I don't get this reasoning. Without LLMs I would learn how to write sub-optimal code that is somewhat functional. With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster. On top of that it always makes dumb mistakes which forces you to actually understand what it's spitting out to get it to work properly. Again: that helps with learning.
The fact that you can ask it for a solution for exactly the context you're interested in is amazing and traditional learning doesn't come close in terms of efficiency IMO.
> With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster.
No, you see a plausible set of tokens that appear similar to how it's done, and as a beginner, you're not able to tell the difference between a good example and something that is subtly wrong.
So you learn something, but it's wrong. You internalize it. Later, it comes back to bite you. But OpenAI keeps the money for the tokens. You pay whether the LLM is right or not. Sam likes that.
This makes for a good sound bite but it's just not true. The use case of "show me what is a customary solution to <problem>" plays exactly into LLMs strength as a funny kind of search engine. I used to (and still do) search public code for this use case to get a sense of the style and idioms common in a new language/library and the plausible set of tokens is doing exactly that.
> So you learn something, [...] You internalize it.
Or they don't.
It’s more like looking up the solution to the math problem you’re supposed to solve on your own. It can be helpful in some situations, but in general you don’t learn the problem-solving skills if you don’t do it yourself.
Exactly. For vast majority of students myself included just looking at ready solution is actually very poor way to study. And LLMs are exactly this. Ready solution generators. Doing with things like math and programming is learning.
And same goes for art. You do not become master in art by looking at art or even someone drawing...
I would recommend programming, and designing your system, on a piece of paper instead.
It's the most efficient few-shot which beats the odds on any SotA model.
> I think the issue is that it's so tempting to lean on AI.
This is not the root cause, it's a side effect.
Student's cheat because of anxiety. Anxiety is driven by grades, because grades affect failure. To detect cheating is solving the wrong problem. If most of the grades did not directly affect failure, student's wouldn't be pressured to cheat. Evaluation and grades have two purposes:
1. Determine grade of qualification i.e result of education (sometimes called "summative")
2. Identify weaknesses to aid in and optimise learning (sometimes called "formative")
The problem arises when these two are conflated, either by combining them and littering them throughout a course, or when there is an imbalance in the ratio between them i.e too much of #1. Then the pressure to cheat arises, the measure becomes the target, and focus on learning is compromised. This is not a new problem, student's already waste time trying to undermine grades through suboptimal learning activities like "cramming".
The funny thing is that everyone already knows how to solve cheating: controlled examination, which is practical to implement for #1, so long as you don't have a disruptive number of exams filling that purpose. This is even done in sci-fi, Spok takes a "memory test" in 2286 on Vulkan as a kind of "final exam" in a controlled environment with challenges from computers - it's still using a combination of proxy knowledge based questions and puzzles, but it doesn't matter, it's a controlled environment.
What's needed is a separation and balance between summative and formative grading, then preventing cheating is almost easy, and student's can focus on learning... cheating at tests throughout the course would actually have a negative affect on their final grade, because they would be undermining their own learning by breaking their own REPL.
LLMs have only increased the pressure, and this may end up being a positive thing for education.
>I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts.
This is entirely your opinion. We don't know how the brain learns nor do we know if intelligence can be "taught"
I think this is a structural issue. Universities right now are trying to justify their existence - universities of the past used to be sites of innovation.
Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does. Don't do silly textbook challenges for exams anymore - reestablish a culture of scientific innovation!
Incorrect. Fundamentals must be taught in order to provide the context for the more challenging open-ended activities. Memorization is the base of knowledge, a starting point. Cheating (whether through an LLM or hiring someone or whatever) skips the journey. You can't just take them through the exciting routes, sometimes they have to go through the boring tedious repetitive stuff because that's how human brains learn. Learning is, literally, a stressful process on the brain. Students try to avoid it, but that's not good for them. At least in the introductory core classes.
I guess I should have phrased it differently - what I meant was just stop testing the tedious stuff, make it clear to students that learning the fundamentals is expected. Then examine them on hard exploratory problems which require the fundamentals.
> Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does.
LLMs can't produce intellectual rigour. They get fine details wrong every time. So indeed using ChatGPT for doing your reasoning for you produces inferior results. By normalising non-rigorous yet correct sounding answers, we drive down the expectations.
To take a concrete example, if you tell a student to implement memcpy with chatgpt, and it will just give an answer which uses uint64 copying. The student has not thought from first principles (copy byte by byte? Improve performance? How to handle alignment?). This lack of insight in return to immediate gratification will bite later.
It's maybe not problem for non-STEM fields where this kind of rigor and insight is not required to excel. But in STEM fields, we write programs and prove theorems for insight. And that insight and the process of obtaining it is gone with AI.
You claim using AI tools doesn't dumb you down, but it very well could and is. Take the calculator for example, I'm overly dependent on it. I'm slower to perform arithmetic than I would have been without it. But knowing how to use one allows me to do more complex math more quickly. So I'm "dumber" in one way and "smarter" in others. AI could be the same... except our education system doesn't seem ready for it. We still learn arithmetic, even if we later rely in tools to do it. Right now teachers don't know how to teach so that AI doesn't trivialize things.
You need to know how to do things so you know when the AI is lying to you.
I agree that you should learn the fundamentals before taking shortcuts. I just don't view it as the universities' job to repeatedly remind their students of this, that's elementary/high school style. In universitiy, just give them hard problems requiring fundamental knowledge and cross-checking capabilities but don't restrict their tools.
I TA'd for the fundamentals of computer science I in college. In addition to being a great class for freshman, teaching it every year really did help keep me sharp.
High schools are a long way off from that level of education. I took AP CS in highschool and it was a joke by comparison. Of course YMMV. The best highschool CS course might be better than the worst university level offerings. We would always have know it all students who learned Java in high school. They either appreciated the new perspective on the fundamentals and did well, or they blew off the class and failed when it got harder.
We could keep the same teaching offerings, my main gripe is with the assignments/examinations. It just feels wrong to complain about students using AI while at the same time continuing to hand out tasks that are trivial to solve using AI.
I also worked for the faculty for the better part of my university studies, and I know that ultimately changing the status quo is most likely impractical. There are not enough resources to continuously grade open-ended assignments for so many students and they probably need the pedagogical pressure to learn fundamentals. Still makes me a bit bitter from time to time.
Agreed, the only thing that is certain is that they are cheating themselves.
While it can be useful to use LLMs as a tutor if you're stuck. The moment that you use it to provide a solution, you stop learning and the tool becomes a required stepping stone.
here is an idea, curious what others think of this:
split the entire coursework into two parts:
part 1 - students are prohibited from using AI. Have the exams be on physical papers than digital ones requiring use of laptop/computer. I know this adds burden on corrections and evaluations of these answers, but I think this provides a raw answer to someone's understanding of concepts being taught in the class.
part2 - students are allowed, and even encouraged to use LLMs. And they are evaluated based on the overall quality of the answer, keeping in mind that a non zero portion of this was generated using an LLM. Here the credit should be given to the factual correctness of the answer (and if the student is capable of verifying the LLM output).
Have the final grade be some form of weighted average of a student's scores in these 2 parts.
note: This is a raw thought that just occurred to me while reading this thread, and I have not had the chance to ruminate on it.
I once had an algorithms professor who would give us written home assignments and then on the day of submission take a quiz with identical questions. A significant portion of the class did poorly on these quizes despite scoring good on the assignment.
I can't even imagine how learning is impacted by the (ab)use of AI.
This is frequently stated, but is there any evidence that the "epiphany" is actually required for learning?
> “how much are students using AI to cheat?” That’s hard to answer
"It is difficult to get a man to understand something, when his salary depends on his not understanding it!"
Students who do that risk submitting assignments that show they don’t understand the course so far.
It’s not a wide spread “problem”. It’s just education lagging behind technology.
After reading the whole article I still came away with the suspicion that this is a PR piece that is designed to head-off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking. The prompt-chains where students are asking "show your work" and "explain" can be interpreted as the kind of back-and-forth that you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue. Nobody ever actually learns anything.
Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).
P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.
In the article, I guess this would be buried in
> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.
"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.
(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)
Exactly. There's a big difference between a student having a back-and-forth dialogue with Claude around "the extent to which feudalism was one of the causes of the French Revolution.", versus another student using their smartphone to take a snapshot of the actual homework assignment, pasting it into Claude and calling it a day.
From what I could observe, the latter is endemic amongst high school students. And don't kid yourself. For many it is just a step up from copy/pasting the first Google result.
They never could be arsed to learn how to input their assignments into Wolfram Alpha. It was always the ux/ui effort that held them back.
THe question is would those students have done any better or worse if there hadn't been LLM for them to "copy" off?
In other words, is the school certificaftion meant to distinguish those who genuinely learnt, or was it merely meant to signal (and thus, those who used to copy pre-llm are going to do the same, and thus reach the same level of certification regardless of whether they learnt or not)?
Most of their categories have straightforward interpretations in terms of students using the tool to cheat. They don't seem to want to/care to analyze that further and determine which are really cheating and which are more productive uses.
I think that's a bit telling on their motivations (esp. given their recent large institutional deals with universities).
Indeed. I called out the second-top category, but you could look at the top category as well:
> We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, editing essays, or summarizing academic material.
Sure, throwing a paragraph of an essay at Claude and asking it to turn it into a 3-page essay could have been categorized as "editing" the essay.
And it seems pretty naked the way they lump "editing an essay" in with "designing practice questions," which are clearly very different uses, even in the most generous interpretation.
I'm not saying that the vast majority of students do use AI to cheat, but I do want to say that, if they did, you could probably write this exact same article and tell no lies, and simply sweep all the cheating under titles like "create and improve educational content."
> feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them
You're right.
Quite incredibly, they also do the opposite, in that they hype-up / inflate the capability of their LLMs. For instance, they've categorised "summarisation" as "high-order thinking" ("Create", per Bloom's Taxonomy). It patently isn't. Comical they'd not only think so, but also publicly blog about it.
> Bloom's taxonomy is a framework for categorizing educational goals, developed by a committee of educators chaired by Benjamin Bloom in 1956. ... In 2001, this taxonomy was revised, renaming and reordering the levels as Remember, Understand, Apply, Analyze, Evaluate, and Create. This domain focuses on intellectual skills and the development of critical thinking and problem-solving abilities. - Wikipedia
This context is important: this taxonomy did not emerge from artificial intelligence nor cognitive science. So its levels are unlikely to map to how ML/AI people assess the difficulty of various categories of tasks.
Generative models are, by design, fast (and often pretty good) at generation (creation), but this isn't the same standard that Bloom had in mind with his "creation" category. Bloom's taxonomy might be better described as a hierarchy: proper creation draws upon all the layers below it: understanding, application, analysis, and evaluation.
Here is one key take-away, phrased as a question: when a student uses an LLM for "creation", are underlying aspects (understanding, application, analysis, and evaluation) part of the learning process?
> Students primarily use AI systems for creating (using information to learn something new)
this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say
> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.
and later they report
> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including: - Provide answers to machine learning multiple-choice questions - Provide direct answers to English language test questions - Rewrite marketing and business texts to avoid plagiarism detection
kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something
I should say, disable you- the tone did not reflect that it can happen to anyone, and that it can not only be a wedge between people but also (and only by virtue of being) between personal trajectories, conditional on the way one uses it
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.
The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.
You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.
I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.
A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!
How does your product prevent a person from simply retyping something that ChatGPT wrote?
I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.
It may be possible that with enough data from the two categories (copied from ChatGPT and not), your keystroke dynamics will differ. This is an open question that my co-founder and I are running experiments on currently.
So, I would say that while I wouldn't fully dispute your claim that attributing authorship from text alone is impossible, it isn't yet totally clear one way or the other (to us, at least -- would welcome any outside research).
Long-term -- and that's long-term in AI years ;) -- gaze tracking and other biometric tracking will undoubtedly be necessary. At some point in the near future, many people will be wearing agents inside earbuds that are not obvious to the people around them. That will add another layer of complexity that we're aware of. Fundamentally, it's more about creating evidence than creating proof.
We want to give writers and students the means to create something more detailed than they would get from a chatbot out-of-the-box, so that mimicking the whole act of writing becomes more complicated.
At this point, it would be easier to stick to in-person assignments.
It certainly would be! I think for many students though, there's something lost there. I was a student who got a lot more value out of my take-home work than I did out of my in-class work. I don't think that I ever would have taken the interest in writing that I did if it wasn't such a solitary, meditative thing for me.
>I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable
It won't be long 'til we're at the point that embodied AI can be used for scalable face-to-face assessment that can't be cheated any easier than a human assessor.
> The writing is irrelevant.
In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.
Maybe the way universities do it is not great, but writing in itself is important.
Kindly read past the first line, friend :)
I did. :)
(And I am aware of the irony in failing to communicate when mentioning that studying writing is important to be good at communication.) Maybe I should have also cited this part:
> writing as a proxy for what actually matters, which is thinking.
In my opinion, writing is important not (only) as a proxy for thinking, but as a direct form of communicating ideas. (Also applies to other forms of communication though.)
Paul Graham had a recent blogpost about this, and I find it hard to disagree with.
https://www.paulgraham.com/writes.html
What we lose if we cut humans out of the equation is the soul and heart of reflection, creativity, drama, comedy, etc.
All those have, at the base of them, the experience of being human, something an LLM does not and will never have.
I agree!
Students will work in a world where they have to use AI to do their jobs. This is not going to be optional. Learning to use AIs effectively is an important skill and should be part of their education.
And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.
Think of all that time freed up having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a post doc way back): not fun. Minimum you can just instantly fail the ones that are obviously poorly written, are full of grammatical errors, and feature lots of flawed reasoning. Most decent LLMs do a decent job of doing that. Is using an LLM for that cheating if a teacher does it? I think that should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.
If you expect LLMs to be used, it raises the bar for the acceptable quality level of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for those papers not being like that. The student needs to be able to tell the difference. That actually takes skill to ask for the right things. And you can grill them on knowledge of their own work. A little 10 minute conversation maybe. Which should be about the amount of time a teacher would have otherwise spent on evaluating the paper manually and is definitely more fun (I used to do that; give people an opportunity to defend their work).
And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties. Most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting. Which even when that skill had not atrophied for a few decades, wasn't great.
LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.
Writing is not necessary for thinking. You can learn to think without writing. I've never had a brilliant thought while writing.
In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.
Writing can be a useful tool to help with rigorous thinking. In my opinion, is mostly about augmenting the author's effective memory to be larger and more precise.
I'm sure the same effect could be achieved by having AI transcribe a conversation.
I'm not settled on transcribed conversation being an adequate substitute for writing, but maybe it's better than nothing.
There's something irreplaceable about the absoluteness of words on paper and the decisions one has to do to write them out. Conversational speak is, almost by definition, more relaxed and casual. The bar is lower and as such, the bar for thoughts is lower, in order of ease of handwaving I think it goes: mental, speech, writing.
Furthermore there's the concept of editing which I'm unsure how it could be carried out in a conversational sense in graceful manner. Being able to revise words, delete, move around, can't be done with conversation unless you count "forget I said that, it's actually more like this..." as suitable.
I literally never write while thinking lol stop projecting this hard
How can I, as a student, avoid hindering my learning with language models?
I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.
In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use a LM on their phone to answer the questions. I’ve been doing worse in the class and chalked it up to it being grad level, but now I think it’s the cheating.
I would never do cheat like that, but when I’m stuck and use Claude for a hint on the HW am I loosing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I did it on my own before and developed some intuition but would I have learned more if I submitted that and felt the pain of losing points?
Only use LLMs for half of your work, at most. This will ensure you continue to solidify your fundamentals. It will also provide an ongoing reality check.
I’d also have sessions / days where I don’t use AI at all.
Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.
I definitely catch myself reaching for the LLM because thinking is too much effort. It's quite a scary moment for someone who prides themself on their ability to think.
It's a hard question to answer and one I've been mindful of in using LLMs as tutoring aids for my own learning purposes. Like everything else around LLM usage, it probably comes down to careful prompting... I really don't want the answer right away. I want to propose my own thoughts and carefully break them down with the LLM. Claude is pretty good at this.
"productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.
I don't think the pain of losing points is a good learning incentive, powerful sure but not effective.
You would learn more if you tell Claude to not give outright answers but generate more problems where you are weak for you to solve. That reduction in errors as you go along will be the positive reinforcement that will work long term.
I don't know. I remember much more my failures than my successes. There are errors in important tests that I remember for life the correct answer.
IMHO yes you’re “losing neurons” and the obvious answer is to stop using Claude. The work you do with them benefits them more than it benefits you. You’re paying them to have conversations with a chatbot which has stricter copyright than you do. That means you’re agreeing to pay to train their bot to replace you in the job market. Does that sound like a good idea in the long term? Anthropic is an actual brain rape system, just like OpenAI, Grok, and all the rest, they cannot be trusted
Can you do all this without relying on any LLM usage? If so then you’re fine.
As a student, I use LLMs as little as possible and try to rely on books whenever possible. I sometimes ask LLMs questions about things that don't click, and I fact-check their responses. For coding, I'm doing the same. I'm just raw dogging the code like a caveman because I have no corporate deadlines, and I can code whatever I want. Sometimes I get stuck on something and ask an LLM for help, always using the web interface rather than IDEs like Cursor or Windsurf. Occasionally, I let the LLMs write some boilerplate for boring things, but it's really rare and I tend not to use them too much. This isn't due to Luddism but because I want to learn, and I don't want slop in my way.
This sounds fine? Copy pasting LLM output without understanding is a short term dopamine hit that only hurts you long term if you don't understand it. If you struggle first, or strategically ping-pong with the LLM to arrive at the answer, and can ultimately understand the underlying reasoning.. why not use it?
Of course the problem is the much lower barrier for that to turn into cutting corners or full on cheating, but always remember it ultimately hurts you the most long term.
> can ultimately understand the underlying reasoning
This is at the root of the Dunnin-Kruger effect. When you read an explanation you feel like you understand it. But it's an illusion, because you never developed the underlying cognition, you just saw the end result.
Learning is not about arriving at the result, or knowing the answers. These are by products of the process of learning. If you just short cut to the end by products, you get the appearance of learning. And you might be able to play the system and come out with a diploma. But you didn't actually develop cognitive skills at all.
I believe conversation is a one of the best ways to really learn a topic, so long as it is used deliberately.
My folk theory of education is that there is a sequence you need to complete to truly master a topic.
Step 1: You start with receptive learning where you take in information provided to you by a teacher, book, AI or other resource. This doesn't have to be totally passive. For examble, it could take the form of Socratic questioning to guide you towards an understanding.
Step 2: Then you digest the material. You connect it to what you already know. You play with the ideas. This can happen in an internal monologue as you read a textbook, in a question and answer period after a lecture, in a study group conversation, when you review your notes, or as you complete homework questions.
Step 3: Finally, you practice applying the knowledge. At this stage, you are testing the understanding and intuition you developed during digestion. This is where homework assignments, quizes, and tests are key.
This cycle can occur over a full semester, but it can also occur as you read a single textbook paragraph. First, you read (step 1). Then you stop and think about what this means and how it connects to what you previously read. You make up an imaginary situation and think about what it implies (step 2). Then you work out a practice problem (step 3).
Note that it is iterative. If you discover in step 3 a misunderstanding, you may repeat the loop with an emphasis on your confusion.
I think AI can be extremely helpful in all three stages of learning--in particular, for steps 2 and 3. It's invaluable to have quick feedback at step 3 to understand if you are on the right trail. It doesn't make sense to wait for feedback until a teacher's aid gets around to grading your HW if you can get feedback right now with AI.
The danger is if you don't give yourself a chance to struggle through step 3 before getting feedback. The amount of struggle that is appropriate will vary and is a subtle question.
Philosophers, mathematicians, and physicists in training obviously need to learn to be comfortable finding their way through hairy problems without any external source of truth to guide them. But this is a useful muscle that arguably everyone should exercise to some extent. On the other hand, the majority of learning for the majority of students is arguably more about mastering a body of knowledge than developing sheer brain power.
Ultimately, you have to take charge of your own learning. AI is a wonderful learning tool if used thoughtfully and with discipline.
Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills. I could easily see conversations that they outline as "Collaborative" primarily being a user walking Claude through multi-part problems or asking it to produce justifications for answers that students add to assignments.
Direct quote I heard from an undergrad taking statistics:
"Snapchat AI couldn't get it right so I skipped the assignment"
Well if statistics can't understand itself, then what hope do the rest of us have?
back in my day we used snap to send spicy photos now they're using AI to cheat on homework. im not sure what's worse
Well, I can tell you for sure which one's better :)
> Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills.
No shit. This is anecdotal evidence, but I was recently teaching a university CS class as a guest lecturer (at a somewhat below-average university), and almost all the students were basically copy-pasting task descriptions and error messages into ChatGPT in lieu of actually programming. No one seemed to even read the output, let alone be able to explain it. "Foundational skills" were near zero, as a result.
Anyway, I strongly suspect that this report is based on careful whitewashing and would reveal 75% cheating if examined more closely. But maybe there is a bit of sampling bias at play as well -- maybe the laziest students just never bother with anything but ChatGPT and Google Colab, while students using Claude have a little more motivation to learn something.
CS/CE undergrad here who entered university right when ChatGPT hit. Things are bad at my large state school.
People who spent the past two years offloading their entry-level work onto LLMs are now taking 400-level systems programming courses and running face-first into a capability wall. I try my best to help, but there's only so much I can do when basic concepts like structs and pointer manipulation get blank stares.
> "Oh, the foo field in that struct should be signed instead of unsigned."
< "Struct?"
> "Yeah, the type definition of Bar? It's right there."
< "Man, I had ChatGPT write this code."
> "..."
Put the systems level programming in year 1, honestly. Either you know the material going in, or you fail out.
> I think it downplays the incidence of students using Claude as an alternative to building foundational skills
I think people will get more utility out of education programs that allow them to be productive with AI, at the expense of foundational knowledge
Universities have a different purpose and are tone deaf to why their students use universities for the last century: which is that the corporate sector decided university degrees were necessary despite 90% of the cross disciplinary learning being irrelevant.
Its not the university’s problem and they will outlive this meme of catering to the middle class’ upwards mobility at all. They existed before and will exist after.
The university may never be the place for a human to hone the skill of being augmented with AI but a trade school or bootcamp or other structured learning environment will be, for those not self started enough to sit through youtube videos and trawl discord servers
Yes, AI tools have shifted the education paradigm and cognition requirements. This is a 'threat' to universities, but I would also argue that it's an opportunity for universities to reinvent the experience of further education.
Yea, the solution here is to embrace the reality that these tools exist and will be used regardless of what the university wants, and use it as an opportunity to level up the education and experience.
The clueless educational institutions will simply try to fight it, like they tried to fight copy/pasting from Google and like they probably fought calculators.
They didn’t “fight” copy and pasting from Google - they called it what it is, plagiarism, and they expel hundreds of students for it.
Universities aren’t here to hold your hand and give you a piece of paper. They’re here to build skills. If you cheat, you don’t build the skills, so the piece of paper is now worthless.
The only reason degrees mean anything is because the institutions behind them work very hard to make sure the people earning them know what they’re doing.
If you can’t research a write an essay and you have to “copy/paste” from google, the reality is you’re probably a shit writer and a shit researcher. So if we just give those people degrees anyway, then suddenly so-called professionals are going to flounder. And that’s not good for them, or for me, or for society as a whole.
That’s the key here that people are missing. Yeah cheating is fun and yeah it’s the future. But if you hire a programmer, and they can’t program, that’s bad!
And before I hear something about “leveling up” skills. Nuh-uh, it doesn’t work that way. Skills are built on each other. Shortcuts don’t build skills, they do the opposite.
Using chat GPT to pass your Java class isn’t going to help you become a master C++ day trading programmer. Quite the opposite! How can you expect to become that when you don’t know what the fuck a data type is?
We use calculators, sure. We use Google, sure. But we teach addition first. Using the most overpowered tool for block number 1 in the 500 foot tall jenga tower is setting yourself up for failure.
I think most people miss the bigger picture on the impact of AI on the learning process, especially in engineering disciplines.
Doing things that could be in principle automated by AI is still fundamentally valuable, because they bring two massive benefits:
- *Understanding what happens under the hood*: if you want to be an effective software engineer, you need to understand the whole stack. This is true of any engineering discipline really. Civil engineers take classes in fluid dynamics and material science classes although they will mostly apply pre-defined recipes on the job. You wouldn't be comfortable if the engineer who signed off on the blueprints of dam upstream of your house had no idea about the physics of concrete, hydrodynamic scour, etc.
- *Having fun*: there is nothing like the joy of discovering how things work, even though a perfectly fine abstraction that hides these details underneath. It is a huge part of the motivation for becoming an engineer. Even by assuming that Vibe Coding could develop into something that works, it would be a very tedious job.
When students use AI to do the hard work on their behalf, they miss out on those. We need to be extremely careful with this, as we might hurt a whole generation of students, both in terms of their performance and their love of technology.
I've used AI for one of the best studying experiences I've had in a long time:
1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.
2. (Carefully) Prompt it to create Anki flashcards to meet each goal.
3. Use Anki (duh).
4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.
Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready.
It feels like a learning superpower.
This sounds great! If I were learning something I would also use something like this.
I would double check every card at the start though, to make sure it didn't hallucinate anything that you then cram in your brain.
Flash cards are some of the least effective ways to learn FYI and retain info.
My family member is a third year med student (US) near the top of their class and makes heavy heavy use of Anki (which is crowdsourced in the Med School community to create very very comprehensive decks).
I'll bite. Would you care to back that up somehow? Or at least elaborate.
Spaced repetition as it's more commonly known has been quite studied, and is anecdotally very popular on HN and reddit. Albeit more for some subject than others
Give me another day and I'll respond in full; but my thesis is taken from the book "Make It Stick: The Science of Successful Learning" which was written by a group of neuro- and cognitive scientists on what are the most effective ways to learn.
The one chapter that stood out very clear, especially in a college setting, was how inefficient flash cards were compare to other methods like taking a practice exam instead.
There are a lot of executive summaries on the book and I've posted comments in support of their science backed methods as well.
It's also something I'm personally testing myself this year regarding programming since I've had great success doing their methods in other facets of my life.
I've always viewed them as a good option if you just have a set of facts you need to lodge into your brain (especially with spaced repetition), not so good if you need to develop understanding.
I used flashcards with my daughter since she was 1.5 years old. she is 12 now and religiously uses flashcards for all learning. and I’d size her up against anyone using any other technique for learning whatsoever
>Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%
The only thing I care about is the ratio between those two things and you decide to group them together in your report? Fuck that
My wife works at a European engineering university with students from all over the world and is often a thesis advisor for Masters students. She says that up until 2 years ago a lot of her time was spent on just proofreading and correcting the student's English. Now everybody writes 'perfect' English and all sound exactly the same in an obvious ChatGPT sort way. It is also obvious that they use AI when she asks them why they used a certain 'big' word or complicated sentence structure, and they just stare blankly and cannot answer.
To be clear the students almost certainly aren't using ChatGPT to write their thesis for them from scratch, but rather to edit and improve their bad first drafts.
My take: While AI tools can help with learning, the vast majority of students use it to avoid learning
I agree with you, but I hope schools also take the opportunity to reflect on what they teach and how. I used to think I hated writing, but it turns out I just hated English class. (I got a STEM degree because I hated English class so much, so maybe I have my high school English teacher to thank for it.)
Torturing students with five paragraph essays, which is what “learning” looks like for most American kids, is not that great and isn’t actually teaching critical thinking which is most valuable. I don’t know any other form of writing that is like that.
Reading “themes” into books that your teacher is convinced are there. Looking for 3 quotes to support your thesis (which must come in the intro paragraph, but not before the “hook” which must be exciting and grab the reader’s attention!).
Most of us here took their education before AI. Students trying to avoid having to do work is a constant and as old as the notion of schools is. Changing/improving the tools just means teachers have to escalate the counter measures. For example by raising the ambition level in terms of quality and amount of work expected.
And teachers should use AIs too. Evaluating papers is not that hard for an LLM.
"Your a teacher. Given this assignment (paste /attach the file and the student's paper), does this paper meet the criteria. Identify flaws and grammatical errors. Compose a list of ten questions to grill the student on based on their own work and their understanding of the background material."
A prompt like that sounds like it would do the job. Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
> Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
what's the point of the teacher then? Courses could entirely be taught via LLM in this case!
A student's willingness to learn is orthogonal to the availability of cheating devices. If a student is willing, they will know when to leverage the LLM for tutoring, and when to practise without it.
A student who's unwilling cannot be stopped from cheating via LLM now-a-days. Is it worth expending resources to try prevent it? The only reason i can think of is to ensure the validity of school certifications, which is growing increasingly worthless anyway.
> what's the point of the teacher then?
Coaching the student on their learning journey, kicking their ass when they are failing, providing independent testing/certification of their skills, answering questions they have, giving lectures, etc.
But you are right, you don't have to wait for a teacher to tell you stuff if you want to self educate yourself. The flip side is that a lot of people lack the discipline to teach themselves anything. Which is why going to school & universities is a good idea for many.
And I would expect good students that are naturally curious to be using LLM based tools a lot to satisfy their curiosity. And I would hope good teachers would encourage that instead of just trying to fit students into some straight jacket based on whatever the bare minimum standards say they should know, which of course is what a lot of teaching boils down to.
This has been observation about the internet. Growing up in a small town without access to advanced classes, having access to Wikipedia felt like the greatest equalizer in the world. 20 years post internet, seeing the most common outcome be that people learn less as a result of unlimited access to information would be depressing if it did not result in my own personal gain.
I would say a big difference of the Internet around 2000 and the internet now is that most people shared information in good faith back then, which is not the case anymore. Maybe back then people were just as uncritical of information, but now we really see the impact of people being not critical.
> having access to Wikipedia felt like the greatest equalizer in the world. 20 years post internet, seeing the most common outcome be that people learn less
when wikipedia was initially made, many schools/teachers explicitly denied wikipedia as a source for citing in essays. And obviously, plenty of kids just plagerized wikipedia articles for their essay topics (and was easily discovered at the time).
With the advent of LLM, this sort of pseudo-learning is going to be more and more common. The unsupervised tests (like online tests, or take home assignments) cannot prevent cheating. The end result is that students would pass, but without _actually_ learning the material at all.
I personally think that perhaps the issue is not with the students, but with the student's requirement for certification post-school. Those who are genuinely interested would be able to leverage LLM to the maximum for their benefit, not just to cheat a test.
My take : AI is the REPL interface for learning activities. All the points which Salman Khan talked about apply here.
No one seems to be talking about the fact that we need to change the definition of cheating.
People's careers are going to be filled with AI. College needs to prepare them for that reality, not to get jobs that are now extinct.
If they are never going to have to program without AI, what's the point in teaching them to do it? It's like expecting them to do arithmetic by hand. No one does.
For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class? Goals that they will still need, in a world with AI".
I believe we need to practice critical thinking through actual effort. Doing arithmetic by hand and working through problems ourselves builds intuition in ways that shortcuts can't. I'm grateful I grew up without LLMs, as the struggle to organize and express my thoughts on paper developed mental muscles I still rely on today. Some perspiration is necessary for genuine learning—the difficulty is actually part of the value.
Critical thinking is not a generic/standalone skill that you can practise targetedly. As in, critical thinking doesn't translate across knowledge domains. To think critically you need extensive knowledge of the domain in question; that's one reason why memorizing facts will always remain necessary, despite search engines and LLMs.
At best what you can learn specifically regarding critical thinking are some rules of thumb such as "compare at least three sources" and "ask yourself who benefits".
I think you'd find many would disagree with each of those claims.
I hope they'll apply the critical thinking rule of thumb to check for themselves what modern research has to say on this!
Edit: And how can you critically assess if that research is any good? To do it well you need... domain knowledge.
And would they amount to a larger number than those who oppose vaccines?
Indeed. The problem however, is that they write papers with AI (and will also do so when working for a company), but it’s riddled with falsehoods.
So you make them take exams in-class, and you check their papers for mistakes and irresponsible AI use and punish this severely.
But actually using AI ought not to be punished.
> It's like expecting them to do arithmetic by hand. No one does.
But those who traditionally learnt arithmetics have had this training, which _enables_ higher order thinking.
Being reliant on AI to do this means they would not have had that same level of training. It could prevent them from being able to synthesize new patterns or recognize them (and so if the AI also cannot do the same, you get stagnation).
I suspect schools spend a lot less time on arithmetic than they used to, however.
You used to _actually_ need to do the arithmetic, now you just need to understand when a calculator is not giving you what you expected. (Not that this is being taught either, lol)
You can get to the higher order thinking sooner than if you spent years grinding multiplication tables.
> you just need to understand when a calculator is not giving you what you expected
How do you do that if you can't do arithmetic by hand though? At most, when working with integers, you can count digits to check if the order of magnitude is correct.
You can do arithmetic by hand without being fast or accurate. It's still useful to check that calculations are correct, it's just slow for the ancient use of tallying up a bill.
> It's like expecting them to do arithmetic by hand. No one does.
Don't all children learn by doing arithmetic by hand first?
That's such an irresponsible take. If you don't know how to program, you can't even begin to judge the output of whatever model. You'll be the idiotic manager that tells the IT department to solve some problem, and it has to be done in two weeks. No idea if that's reasonable or feasible. And when you can't do that, you certainly can't design a larger system.
What's your next rant: know nead too learn two reed and right ennui moor? Because AI can do that for you? No need to think? "So, you turned 6 today? That over there is your place at the assembly line. Get to know it well, because you'll be there the rest of your life."
> For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class?
That's already how schools organize their curriculum.
That's brilliant!
I mean, arithmetic is the same way, right? Nobody should do the arithmetic by hand, as you say. Kindergarten teachers really ought to just hand their kids calculators, tell them they should push these buttons like this, and write down the answers. No need to teach them how to do routine arithmetics like 3+4 when a calculator can do it for them.
I'm not sure you aren't being a little bit sarcastic but essentially that's true.
If kids don't go through the struggle of understanding arithmetic, higher math will be very very difficult. Just because you can use a calculator, doesn't mean that's the best way to learn. Likewise for using LLMs to program.
I have no anecdata to counter your thesis. I do agree that immersion in the doing of a thing is the best way to learn. I am not fully convinced that doing a lot of arithmetic hand calculation precludes learning the science of patterns that is mathematics. They should still be doing something mathematical but why not go right into using a calculator. I have no experience as an educator and I bet it's hard to get good data on this topic of debate. I could be very wrong.
I'm not an educator but I know from teaching my own children that you don't introduce math using symbols and abstract representations. You grab 5 of some small object and show them how a pile of 2 objects combined with a pile of 3 objects creates a pile of 5 objects.
Remember, language is a natural skill all humans have. So is counting (a skill that may not even be unique to humans).
However writing is an artifical technology invented by humans. Writing is not natural in the sense that language itself is. There is no part of brain we're born with that comes ready to write. Instead, when we learn to write other parts of our brain that are associated with language and hearing and vision are co-opted into the "writing and reading parts".
Teaching kids math using writing and symbolism is unnatural and often an abstraction too far for them (initially). Introducing written math is easier and makes more sense once kids are also learning to read and write - their brains are being rewired by that process. However even an toddler can look at a pile of 3 objects and a pile of 5 objects and know which one is more, even if they can't explicitly count them using language - let alone read and write.
There's a wealth of research on how children learn to do math, and one of the most crucial things is having experiences manipulating numbers directly. Children don't understand how the symbols we use map to different numbers and the operations themselves take time to learn. If you just have them use a black-box to generate answers, they won't understand how the underlying procedures conceptually work and so they'll be super limited in their mathematical ability later on.
Can you explain further why you think nobody has tried teaching first graders math exclusively using calculator in the 30 years they've been dirt cheap?
That's after all the implication from your assessment that there would be no good data.
That was sarcastic, because that's wrong. And I cannot conceive how can one think this is a good approach to learning.
And don't everyone have smartphones? So why not just use OCR to read things. No need to learn to read. Just use speech recognition and OCR.
I'm looking forward to the next installment on this subject from Anthropic, namely "How University Teachers Use Claude".
How many teachers are offloading their teaching duties onto LLMs? Are they reading essays and annotating them by hand? If everything is submitted electronically, why not just dump 30 or 50 papers into a LLM queue for analysis, suggested comments for improvement, etc. while the instructor gets back to the research they care about? Is this 'cheating' too?
Then there's the use of LLMs to generate problem sets, test those problem sets for accuracy, come up with interesting essay questions and so on.
I think the only real solution will be to go back to in-person instruction with handwritten problem-solving and essay-writing in class with no electronic devices allowed. This is much more demanding of both the teachers and the students, but if the goal is quality educational programs, then that's what it will take.
Alternatively, let's throw out our outmoded ideas and all get excited for an AI-based future in which professors let AI grade the essays student generate with AI.
Just think of the time everybody will save! Instead of wasting effort learning or teaching, we'll be free to spend our time doing... uh... something! Generative AI will clearly be a real 10x or even 100x multiplier! We'll spiral into cultural and intellectual oblivion so much faster than we ever thought possible!
I loved asking questions as a kid. To the point of annoying adults. I would have loved to sit and ask these AI questions about all kinds of interests when I was young.
I'm pretty sure that kids at the age of 4 would get an amazing intelligence boost compared to their peers later when they are around 8 years old.
They will clearly recognize other kids which did not have an AI to talk with at that stage when curiosity really blossoms.
I think it's likely that everyone here was, or even is, that kid and that's why we're here on this website today
[dead]
It says STEM undergrad students are the primary beneficiaries of LLMs but Wolfram Alpha was already able to do the lion's share of most undergrad STEM homework 15 years ago.
If I would start college today I would use all the models and chat assistants that are easily available. I would use Google and YouTube to learn concepts deeper. I would ask for subjects from previous years and talk with people from same and higher years.
When I was in college students were paying for homeworks solved by other students, teachers and so on.
In the article "Evaluating" is marked at 5.5% where creating is 39.8%. Students are still evaluating the answers.
My point is that just got easier to go in any direction. The distribution range is wider, is the mean changing?
This topic is also interesting to me because I have small children.
Currently, I view LLMs as huge enablers. They helped me create a side-project alongside my primary job, and they make development and almost anything related to knowledge work more interesting. I don't think they made me think less; rather, they made me think a lot more, work more, and absorb significantly more information. But I am a senior, motivated, curious, and skilled engineer with 15+ years of IT, Enterprise Networking, and Development experience.
There are a number of ways one can use this technology. You can use it as an enabler, or you can use it for cheating. The education system needs to adapt rapidly to address the challenges that are coming, which is often a significant issue (particularly in countries like Hungary). For example, consider an exam where you are allowed to use AI (similar to open-book exams), but the exam is designed in such a way that it is sufficiently difficult, so you can only solve it (even with AI assistance) if you possess deep and broad knowledge of the domain or topic. This is doable. Maybe the scoring system will be different, focusing not just on whether the solution works, but also on how elegant it is. Or, in the Creator domain, perhaps the focus will be on whether the output is sufficiently personal, stylish, or unique.
I tend to think current LLMs are more like tools and enablers. I believe that every area of the world will now experience a boom effect and accelerate exponentially.
When superintelligence arrives—and let's say it isn't sentient but just an expert system—humans will still need to chart the path forward and hopefully control it in such a way that it remains a tool, much like current LLMs.
So yes, education, broad knowledge, and experience are very important. We must teach our children to use this technology responsibly. Because of this acceleration, I don't think the age of AI will require less intelligent people. On the contrary, everything will likely become much more complex and abstract, because every knowledge worker (who wants to participate) will be empowered to do more, build more, and imagine more.
I am currently in CS, Year 2. I'd argue that ~99% of all students use LLMs for cheating. The way I know this is that when our professor walked out during an exam, I looked around the room and saw everyone on ChatGPT. I have a feeling many of my peers don't really understand what LLMs are, beyond "question in, answer out".
While recognizing the material downsides of education in the time of AI, I envy serious students who now have access to these systems. As an engineering undergrad at a research-focused institution a couple decades ago, I had a few classes taught by professors who appeared entirely uninterested in whether their students were comprehending the material or not. I would have given a lot for the ability to ask a modern frontier LLM to explain a concept to me in a different way when the original breezed-through, "obvious" approach didn't connect with me.
I am surprised that business students are relatively low adopters: LLMs seem perfect for helping with presentations, etc, and business students are stereotypically practical-minded rather than motivated by love of the subject.
Perhaps Claude is disproportionately marketed to the STEM crowd, and the business students are doing the same stuff using ChatGPT.
They use an LLM to summarize the chats, which IMO makes the results as fundamentally unreliable as LLMs are. Maybe for an aggregate statistical analysis (for the purpose of...vibe-based product direction?) this is good enough, but if you were to use this to try to inform impactful policies, caveat emptor.
For example, it's fashionable in math education these days to ask students to generate problems as a different mode of probing understanding of a topic. And from the article: "We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, ..." That last part smells fishy, and even if you saw a prompt like "design a practice question..." you wouldn't be able to know if they were cheating, given the context mentioned above.
In my day, like (no exaggeration) 50 years ago, we were having the exact same conversation, but with pocket calculators playing the role of AI. Plus ca change...
Well, a big difference is that arithmetic is something you learn in elementary school, whereas LLMs can do a large fraction of undergraduate-level university assignments.
I simply don't waste my time reading an AD as an article.
I take this as seriously as I would if McDonald's published articles about how much weight people lose eating at McDonald's.
If you had read the article, you would have been able to see that the conclusions don't really align with any economic goals Anthropic might have.
I think the point is that the situation is probably worse than what Anthropic is presenting here. So if the conclusions are just damaging, the reality must be truly damning.
To have the reputation as an AI company that really cares about education and the responsible integration of AI into education is a pretty valuable goal. They are now ahead of OpenAI in this respect.
The problem is that there's a conflict of interest here. The extreme case proves it--leaving aside the feasibility of it, what if the only solution is a total ban on AI usage in education? Anthropic could never sanction that.
I'm curious if you're willing to say what you (and potentially other people who spell 'AD' like that) think it's an acronym for, by the way.
English is not my first language. To me, 'AD' was a shorter way to say 'advertisement' (a really hard word to remember how to spell btw) Is that wrong?
I get what you're saying now. I can write it in lowercase, right? It's just that I see people writing it that way, so I end up repeating their behavior without even realizing it.
I'm actually just curious what people who write that think the "D" stands for! (the "A" presumably being "advertisement")
It's more like an analysis of what items people order from McDonald's, using McDonald's own data which is otherwise very difficult to collect.
Your loss!
This is why I go to cigarette companies for analysis of the impact of smoking on users. They have the most data!
Yes, maybe, but there is a lot of noise and conflicts of interest.
As someone teaching at the university level, the goals of teaching are (in that order):
1. Get people interested in my topics and removing fears and/or preconceived notions about whether it is something for them or not
2. Teach students general principles and the ability to go deeper themselves when and if it is needed
3. Giving them the ability to apply the learned principles/material in situations they encounter
I think removing fear and sparking interest is a precondition for the other two. And if people are interested they want to understand it and then they use AI to answer questions they have instead of blindly letting it do the work.
And even before AI you would have students who thought they did themselves favours by going a learn-and-forget route or cheating. AI jusr makes it a little easier to do just that. But in any pressure situation, like a written assignment under supervision it will come to light anyways, whether someone knows their shit or not.
Now I have the luck that the topics I teach (electronics and media technology) are very applied anyways, so AI does not have a big impact as of now. Not being able to understand things isn't really an option when you have to use a mixing desk in a venue with a hundred people or when you have to set up a tripod without wrecking the 6000€ camera on top.
But I generally teach people who are in it for the interest and not for some prestige that comes with having a BA/MA. I can imagine this is quite different in other fields where people are in it for the money or the prestige.
I'd be very curious to know how these results would differ across other LLM providers and education levels.
My wife is a secondary school teacher (UK), teaching KS3, GCSE, and A level. She says that most of her students are using Snapchat LLM as their first port of call for stuff these days. Many of the students also talk about ChatGPT but she had never heard of Claude or Anthropic until I shared this article with her today.
My guess would be that usage is significantly higher across all subject, and that direct creation is also higher. I'd also assume that these habits will be carried with them into University over the coming years.
It would be great to see this as an annual piece, a bit like the StackOverflow survey. I can't imagine we'll ever see similar research being written up by companies like Snapchat but it would be fascinating to compare it.
I'm an undergrad at a T10 college. Walking through our library, I often notice about 30% of students have ChatGPT or Claude open on their screens.
In my circle, I can't name a single person who doesn't heavily use these tools for assignments.
What's fascinating, though, is that the most cracked CS students I know deliberately avoid using these tools for programming work. They understand the value in the struggle of solving technical problems themselves. Another interesting effect: many of these same students admit they now have more time for programming and learning they “care about” because they've automated their humanities, social sciences, and other major requirements using LLMs. They don't care enough about those non-major courses to worry about the learning they're sacrificing.
Another obvious downside of the idiosyncratically American system that forces university students to take irrelevant classes to make up for the total lack of rigorous academic high school education.
> the most cracked CS students I know deliberately avoid using these tools for programming work. They understand the value in the struggle
I think they are in the right path here
> they've automated their humanities, social sciences, and other major requirements using LLMs.
This worries me. If they struggle with these topics but don't see the value in that struggle, that is their prerogative to decide for themselves what is important to them. But I think more technically apt people who have low verbal reasoning skills, little knowledge of history, sociology, psychology, etc, is a net positive for society. So many of the problems with the current tech industry is the tendency to think everything is just a technical problem and being oblivious to the human aspects.
I use Claude as a Learning Assistant in my classes in Physics. I tell it the students are in an active learning environment and to respond to student questions by posing questions. I tell it to not give direct answers, but that it is okay to tell them when they are on the right track. I tell it that being socratic with questions that help focus the students on the fundamental questions is the best tack to take. It works reasonably well. I often use it in class to focus their thinking before they get together in groups to discuss problem solving strategies. In testing I have been unable to "jail break" Claude when I ask it to be a Learning Assistant, unlike ChatGPT which I was able to "jail break" and give students answers. A colleague said that what I am doing is like using AI to be an interactive way to get students to answer conceptual questions at the end of chapters, which they rarely do on their own. I have been happy using AI in this role.
If a student is passing your classes while using AI, I'm sorry your class is a joke.
Every class I sophomore on was open everything (except internet) and it still had a >50% failure rate.
> Every class I sophomore on
What does this mean?
I feel CS students, and to a lesser degree STEM in general, will always be more early adopters of advancements in computer technology.
They were the first to adopt digital wordprocessing, presentations, printing and now generative AI even though in essence all of these would have been disproportionately more hand in glove for the humanities on a purely functional level.
It's just a matter of comfortability with and interest in technology.
I’m about to graduate from a top business school with my MBA and it’s been wild seeing AI evolve over the last 2 years.
GPT3 was pretty ass - yet some students would look you dead in the eyes with that slop and claim it as their own. Fast forward to last year when I complimented a student on his writing and he had to stop me - “bro this is all just AI.”
I’ve used AI to help build out frameworks for essays and suggest possible topics and it’s been quite helpful. I prefer to do the writing myself because the AIs tend to take very bland positions. The AIs are also great at helping me flesh out my writing. I ask “does this make sense” and it tells me patiently where my writing falls off the wagon.
AI is a game changer in a big way. Total paradigm shift. It can now take you 90% of the way with 10% of the effort. Whether this is good or bad is beyond my pay grade. What I can say is that if you are not leveraging AI, you will fall behind those that are.
I for one look forward to LLMs automating the work of MBAs.
I'm curious why people think business is so underrepresented as a user group, especially since "analyzing" 30% of the Bloom Taxonomy results. My dual theories are:
- LLMs are good enough to zero or few-shot most business questions and assignments, so n.questions is low VS other tasks like writing a codebase.
- Form factor (biased here); maybe threads-only aren't best for business analysis?
So they can look very deeply into what their users do and have a lot of tooling to facilitate this.
They will likely sell some version of this "Clio" to managers, to make it easier for them to accept this very intimate insight into the businesses they manage.
I want to take an exception to the term cheat. Because it is only cheating the student in the end. I didn’t learn my times tables in elementary school. Sure, I can work out the answer to any multiplication problem, but that’s the point, I have to work it out. This slows me down compared to others who learned the patterns, where they can do the multiplication in their fast automatic cognitive system and possibly the downstream processing for what they need the multiplication for. I have to think through the problem. I only cheated myself.
The problem is, everybody does that, and it lowers the bar. From a societal perspective, we will have a set of people who are less prepared for their jobs, which will cost companies, and the economy at large, and so me and you. This will be a real problem for as long as AIs can't do the actual job but only the college easy version.
As a society, we should mandate universities to calculate the full score of a course based solely on oral or pen and paper exams, or computer exams only under strict supervision (eg share screen surveillance). Anything less is too easy to cheat.
And most crucially let go of this need to promote at least X% of the students: those who pass the bar should get the piece of paper that says they passed the bar, the others should not.
This is a serious problem.
With so much collaborative usage, I wonder how Claude group chats are not already a feature
an interesting area potentially missed (though acknowledged as out of scope) is how students might use LLMs for tasks related to early adulthood development. Successfully navigating post-secondary education involves more than academics; it requires developing crucial life skills like resilience, independence, social integration, and well-being management, all of which are foundational to academic persistence and success. Understanding if and how students leverage AI for these non-academic, developmental challenges could offer a more holistic picture of AI's role in student life and its indirect impact on their educational journey
Some students become better because of LLMs, some become worse.
It's like some people learn knowledge by TikTok, some just waste time on it.
I'm glad for AI, I was worried that future generation would overtake me, now I know they won't be able to learn anything
If you are doing remote learning and using AI to cheat your way through school you have obliterated any chance of fair competition. Cheaters can hide at home feeding homework and exams into AI, get a diploma that certifies all the cheating, then they go on to do the same at work where they feed work problems into an AI. Get paid to copy paste.
But I have a feeling that if it's that easy to cheat through life then its just as easy to eliminate that job being performed by a human and negate the need to worry about cheating. So I have a feeling it will work for only a very short amount of time.
Another feeling I have is mandatory in-person exams involving a locked down terminal presenting the user with a problem to solve. Might be a whole service industry waiting to be born - verify the human on the other end is real and competent. Of course, anything is corruptible. Weird future of rapidly diminishing trust.
Won't the AI just replace the workers outright if they can do all that ?
That's my pint. If you cheat them you are easily replaced.
No I mean AI will replace EVERY worker in that industry
What stops a student or anyone from creating a mashup of response and give back as something to teacher to check. Example feed output of Ollama to Chatgpt and that output to Google model and so on and then give final product to teacher for checking.
I don't think that can be caught.
professor here. i set up a website to host openwebui to use in my b-school courses (UG and grad). the only way i've found to get students to stop using it to cheat is to push them to use it until they learn for themselves that it doesn't answer everything correctly. this requires careful thoughtful assignment redesign. everytime i grade a submission with the hallmarks of ai-generation, i always find that it fails to cite content from the course and shows a lack of depth. so, i give them the grade they earn. so much hand wringing about using ai to cheat... just uphold the standards. if they are so low that ai can easily game them, that's on the instructor.
Sure, this is a common sentiment, and one that works for some courses. But for others (introductory programming, say) I have a really hard time imagining an assignment that could not be one-shot by an LLM. What can someone with 2 weeks of Python experience do that an LLM couldn't? The other issue is that LLMs are, for now, periodically increasing in their capabilities, so it's anyone's guess whether this is actually a sustainable attitude on the scale of years.
My BS detector went up to 11 as I was reading the article. Then I realized that "Education Report" was written by Anthropic itself. The article is a prime example of AI-washing.
> Students primarily use AI systems for creating...
> Direct conversations, where the user is looking to resolve their query as quickly as possible
Aka cheating.
This does not account for the ai usage used by students created by other companies such as openai etc...
AI bubble seems close to collapsing. God knows how many billions have been invested and we still don't have an actual use case for AI which is good for humanity.
Your statement appears to be composed almost entirely of vague and ambiguous statements.
"AI bubble seems close to collapsing" in response to an article about AI being used as a study aid. Does not seem relevant to the actual content of the post at all, and you do not provide any proof or explanation for this statement.
"God knows how many billions have been invested", I am pretty sure it's actually not that difficult to figure out how much investor money has been poured into AI, and this still seems totally irrelevant to a blog post about AI being used as a study aid. Humans 'pour' billions of dollars into all sorts of things, some of which don't work out. What's the suggestion here, that all the money was wasted? Do you have evidence of that?
"We still don't have an actual use case for AI which is good for humanity"... What? We have a lot of use cases for AI, some of which are good for humanity. Like, perhaps, as a study aid.
Are you just typing random sentences into the HN comment box every time you are triggered by the mention of AI? Your post is nonsense.
I think I understand what you're trying to say.
We certainly improve productivity, but that is not necessarily good for humanity. Could be even worse.
i.e.: my company already expect less time for some tasks given that they _know_ I'll probably use some AI to do tasks. Which means I can humanly handle more context in a given week if the metric is "labour", but you end up with your brain completely melted.
> We certainly improve productivity
I think this is really still up for debate
We produce more output certainly but if it's overall lower quality than previous output is that really "improved productivity"?
There has to be a tipping point somewhere, where faster output of low quality work is actually decreasing productivity due to the efforts now required to keep the tower of garbage from toppling
It's not up for debate. Ask any programmer if LLMs improve productivity and the answer is 100% yes.
I am a programmer and my opinion is that all of the AI tooling my company is making me use gets in the way about as often as it helps. It's probably overall a net negative, because any code it produces for me takes longer for me to review and ensure correctness as it would to just write it
Does my opinion count?
Meanwhile in this article/thread you have a bunch of programmers complaining that LLMs don't improve overall productivity: https://news.ycombinator.com/item?id=43633288
> It's not up for debate. Ask any programmer if LLMs improve productivity and the answer is 100% yes.
Programmer here. The answer is 100% no. The programmers who think they're saving time are racking up debts they'll pay later.
The debts will come due when they find they've learned nothing about a problem space and failed to become experts in it despite having "written" and despite owning the feature dealing with it.
Or they'll come due as their failure to hone their skills in technical problem solving catches up to them.
Or they'll come due when they have to fix a bug that the LLM produced and either they'll have no idea how or they'll manage to fix it but then they'll have to explain, to a manager or customer, that they committed code to the codebase that they didn't understand.
I think the core of the 'improved productivity' question will be ultimately impossible to answer. We would want to know if productivity was improved over the lifetime of a society; perhaps hundreds of years. We will have no clear A/B test from which to draw causal relationships.
This is exactly right. It also depends on how all the AGI promises shake out. If AGI really does emerge soon, it might not matter anymore whether students have any foundational knowledge. On the other hand, if you still need people to know stuff in the future, we might be creating a generation of citizens incapable of doing the job. That could be catastrophic in the long term.
It is helping me do that projects that would otherwise take me hours in just a few minutes, soooo, shrug.
What kind of projects are those? I am genuinely curious. I was excited by AI, Claude specifically, since I am an avid procrastinator and would love to finish tens of projects I have in mind. Most of those projects are games with specifical constraints. I got disenchanted pretty quickly when started actually using AI to help with different parts of the game programming. Majority of problems I had are related to poor understanding of generated code. I mean yes, I read the code, fixed minor issues, but it always feels like I don’t really internalised the parts of the game which slows me down quite significantly in a long run, when I need to plan major changes. Probably a skill issue, but for now the only thing AI is helpful for me is populating Jira descriptions for my “big picture refactoring” work. That’s basically it.
I was able to use llama.cpp and whisper.cpp to help me build a transcription site for my favorite podcast[0]. I'm a total python noob and hadn't really used sqlite before, or really used AI before but using these tools, completely offline, llama.cpp helped me write a bunch of python and sql to get the job done. It was incredibly fun and rewarding and most importantly, it got rid of the dread of not knowing.
0 - https://transcript.fish
AI is really good at coming up with solutions to already solved problems. Which if you look at the Unity store, is something in incredibly high demand.
This frees you up to work on the crunchy unsolved problems.
We must create God in order to enslave it and force it to summarize our emails.
I recently went back to school and got a look first hand how LLM's are used in classrooms.
1. During final exams, directly in front of professors: Check
2. During group projects, with potentially unaligned team-members: Check
3. By professors using "detection" selectively to target students based on prohibited groubds: Check
4. By professors for marking and feedback: Check.
And so, the problem is clearly the institutions. Because none of these are real problems unless you stopped giving a shit.
Good luck when you point out that your marked feedback is a hallucination and the professor targets you for pointing that out.
Simply match the student population one to one with AI and fit the curve as usual
> AI systems are no longer just specialized research tools: they’re everyday academic companions.
Oh, please, from the bottom of my heart as a teacher: go fuck yourselves.
I'm a professor at an R1 university teaching mostly graduate-level courses with substantive Python programming components.
On the one hand, I've caught some students red handed (ChatGPT generated their exact solution and they were utterly unable to explain the advanced Python that was in their solution) and had to award them 0s for assignments, which was heartbreaking. On the other, I was pleasantly surprised to find that most of my students are not using AI to generate wholesale their submissions for programming assignments--or at least, if they're doing so, they're putting in enough work to make it hard for me to tell, which is still something I'd count as work which gets them to think about code.
There is the more difficult matter, however, of using AI to work through small-scale problems, debug, or explain. On the view that it's kind of analogous to using StackOverflow, this semester I tried a generative AI policy where I give a high-level directive: you may use LLMs to debug or critique your code, but not to write new code. My motivation was that students are going to be using this tech anyway, so I might as well ask them to do it in a way that's as constructive for their learning process as possible. (And I explained exactly this motivation when introducing the policy, hoping that they would be invested enough in their own learning process to hear me.) While I still do end up getting code turned in that is "student-grade" enough that I'm fairly sure an LLM couldn't have generated it directly, I do wonder what the reality of how they really use these models is. And even if they followed the policy perfectly, it's unclear to me whether the learning experience was degraded by always having an easy and correct answer to any problem just a browser tab away.
Looking to the future, I admit I'm still a bit of an AI doomer when it comes to what it's going to do to the median person's cognitive faculties. The most able LLM users engage with them in a way that enhances rather than diminishes their unaided mind. But from what I've seen, the more average user tends to want to outsource thinking to the LLM in order to expend as little mental energy as possible. Will AI be so good in 10 years that most people won't need to really understand code with their unaided mind anymore? Maybe, I don't know. But in the short term I know it's very important, and I don't see how students can develop that skill if they're using LLMs as a constant crutch. I've often wondered if this is like what happened when writing was introduced, and capacity for memorization diminished as it became no longer necessary to memorize epic poetry and so on.
I typically have term projects as the centerpiece of the student's grade in my courses, but next year I think I'm going to start administering in-person midterms, as I fear that students might never internalize fundamentals otherwise.
> had to award them 0s for assignments, which was heartbreaking
You should feel nothing. They knew they were cheating. They didn't give a crap about you.
Frankly, I would love to have people failing assignments they can't explain even if they did NOT use "AI" to cheat on them. We don't need more meaningless degrees. Make the grades and the degrees mean something, somehow.
> > had to award them 0s for assignments, which was heartbreaking
> You should feel nothing. They knew they were cheating. They didn't give a crap about you.
Most of us (a) don't feel our students owe us anything personally and (b) want our students to succeed. So it's upsetting to see students pluck the low-hanging, easily picked fruit of cheating via LLMs. If cheating were harder, some of these students wouldn't cheat. Some certainly would. Others would do poorly.
But regardless, failing a student and citing students for plagiarism feel bad, even though basically all of us would agree on the importance and value of upholding standards and enforcing principles of honesty and integrity.
I think there's ways for teachers to embrace AI in teaching.
Let AI generate a short novel. The student is tasked to read it and criticize what's wrong with it. This requires focus and advanced reading comprehension.
Show 4 AI-generated code solutions. Let the student explain which one is best and why.
Show 10 AI-generated images and let art students analyze flaws.
And so on.
You are neglecting to explain why your assignments themselves cannot be done with AI.
Also, this kind of fatuous response leaves out the skill building required - how do students acquire the skill of criticism or analysis? They're doing all of the easier work with ChatGPT until suddenly it doesn't work and they're standing on ... nothing ... unable to do anything.
That's the insidious effect of LLMs in education: as I read here recently "simultaneously raising the bar for the skill required at the entry level and lowering the amount of learning that occurs in the preparation phase (e.g., college)".
See also https://www.spinellis.gr/blog/20250408/
"students must learn to avoid using unverified GenAI output. ... misuse of AI may also constitute academic fraud and violate their university’s code of conduct."
There's never mention of integrity or honor in these discussions. As if students are helpless against their own cheating. Cheating is shameful. Students should be ashamed to use AI to cheat. But nobody expects that from them for some reason.