Physicist Tom Hartsfield, commenting on a new paper, takes issue with its claim that “Without any prior knowledge of the underlying physics, our algorithm discovers the intrinsic dimension of the observed dynamics and identifies candidate sets of state variables.” That doesn’t seem to have happened.
The problem set for the computer was from classical mechanics: for a pendulum hanging from another pendulum, compute the number of variables needed for a solution:
This problem requires two variables — the angle of each pendulum to the vertical — or four variables if a Cartesian (xy) coordinate system is used. If both pendulum bobs are hung from springs instead of rigid rods, the two variable spring lengths are added to get six variables in the Cartesian system.

– Tom Hartsfield, “No, AI did not discover a new type of physics” at Big Think (September 17, 2022)

The paper is open access (for now).
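Hartsfield’s variable counts can be tallied with simple arithmetic. The sketch below only restates his bookkeeping (the function name and per-bob breakdown are illustrative assumptions, not anything from the paper):

```python
# Tally the variables for each formulation of the double pendulum,
# following Hartsfield's counts (illustrative arithmetic only).

def variable_count(bobs, coords_per_bob, extra_per_bob=0):
    """Total variables: one coordinate set per bob, plus any extras
    (e.g. a spring length per bob)."""
    return bobs * (coords_per_bob + extra_per_bob)

angles    = variable_count(bobs=2, coords_per_bob=1)                   # one angle per bob
cartesian = variable_count(bobs=2, coords_per_bob=2)                   # (x, y) per bob
springs   = variable_count(bobs=2, coords_per_bob=2, extra_per_bob=1)  # add a spring length per bob

print(angles, cartesian, springs)  # → 2 4 6
```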
It didn’t do very well:
For the rigid pendulum on a pendulum, it gave two answers: ~7 and ~4-5. (The correct answer is 4 variables.) Similarly, it calculated ~8 and ~5-6 for the double-spring pendulum. (The correct answer is 6 variables.) The researchers praise the smaller estimates as being near the true answers.

– Tom Hartsfield, “No, AI did not discover a new type of physics” at Big Think (September 17, 2022)
But when he examined the supplementary materials published with the paper, he discovered a bigger problem:
The computer didn’t actually calculate 4 variables and 6 variables. Its best calculations were 4.71 and 5.34.

– Tom Hartsfield, “No, AI did not discover a new type of physics” at Big Think (September 17, 2022)
These figures, for an intermediate undergraduate physics problem (four variables) and a more advanced one (six variables) respectively, do not even round to the correct answers.
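The mismatch is plain arithmetic, as a minimal check shows (the figures are taken directly from Hartsfield’s article; the dictionary layout is just for illustration):

```python
# The paper's best estimates vs. the known variable counts, per Hartsfield.
estimates = {
    "rigid double pendulum":  (4.71, 4),  # estimate, true count
    "double spring pendulum": (5.34, 6),
}

for system, (estimate, true_count) in estimates.items():
    # 4.71 rounds to 5, not 4; 5.34 rounds to 5, not 6.
    print(f"{system}: {estimate} rounds to {round(estimate)}; "
          f"true answer is {true_count}")
```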
The researchers hope to use the AI to analyze unknown systems, but Hartsfield thinks it would be best to get a handle on the known systems first.
You may also wish to read: What does AI in education mean for critical thinking skills? Students, as reported at Motherboard, are increasingly using GPT-3 and other text-generator programs to write essays for them. I tested GPT-3 with two questions from the midterm examination that I recently gave to my introductory statistics class. Both GPT-3 answers were wrong. (Gary Smith)