Sci Foo Unconference: Horseshoe Crabs, Alchemy and (Of Course) AI
I’ve just returned from this year’s Sci Foo unconference at X (Google’s Moonshot Factory) in Mountain View, CA.
Sci Foo is an unusual scientific gathering in that it is invitation-only, free, and not restricted to any specific scientific field. There were around 200 researchers, close to half from outside the US, doing research in astrophysics, biophysics, marine biology, biomedicine, neuroscience, and, yes, even economics. Sci Foo is called an “unconference” because people do not present papers in pre-organized sessions, as at normal conferences. Instead, on the first day, they propose and make pitches for topics they would like to talk about. Attendees then sign up for the proposed sessions and those sessions with the most interest are scheduled.
Each session is an hour long, with the organizer making brief introductory remarks and then letting the 10‒20 people in the room discuss the topic among themselves. The organizer is essentially a facilitator charged with giving everyone who wants to speak a chance to do so.
There are also “lightning talks,” with each speaker given a firm maximum of five minutes to describe the work they are doing and to invite other researchers to talk with them during meals.
Eclectic lightning
In addition to the expected research on cures for various ailments, many of the lightning talks were eclectic; for example, one featured a project using river flows and tree growth to measure time in new ways. Another speaker was passionate about protecting horseshoe crabs, which have been around, largely unchanged, for 445 million years, but are now endangered by overharvesting for biomedical research and bait.
One speaker described a project to create a database of size measurements (height, length, width, and diameter) of all marine animal species. (There are currently 170,214 size measurements of 85,204 species, which represent 40% of all known marine animal species.)
Another lightning talk discussed a project for using nuclear reactions to create non-radioactive gold from other elements. Yes, alchemy! Scientists have evidently long known that this process is theoretically possible, but it has been inefficient and expensive. The speaker claimed that these obstacles were on the verge of being overcome.
Another speaker described his team’s work trying to determine whether Tugunbulak, a buried city in the mountains of Uzbekistan that was recently mapped using LiDAR technology, was the lost Silk-Road city of Marsmanda.
Another speaker — a sleep researcher — posed a provocative question. Is being awake our normal state, and we sleep to recuperate, or is our normal state sleeping, and we wake to obtain the food and other resources we need to keep sleeping? As I listened, I thought of our family dog, who was no doubt sleeping happily as she waited for me at home.
Large Language Models (LLMs) in education were a big topic
The last Sci Foo unconference I was invited to was pre-COVID and the big topic was the replication crisis that was undermining the credibility of science and scientists. This year, the big topics were: (a) reduced federal support for science; and (b) ChatGPT and other large language models (LLMs).
The people at the conference were generally so well-established and widely respected that their jobs were safe. But many lamented having to shut down projects and fire assistants, who were having little success finding new jobs.
As for LLMs, the attendees as a whole are far less skeptical than I am. Over and over, I heard a chemist, marine biologist, or other researcher describe an ambitious moonshot dream (such as more efficient use of groundwater in parched lands) and then speculate on using AI to realize the dream.
For example, the internet is polluted with garbage and scientists know all too well that garbage in = garbage out. One researcher’s dream was to use an LLM to create an international repository of “clean” data that researchers all over the world could rely on. As if an LLM that had been trained on the Internet swamp could distinguish between good data and bad data.
LLMs hallucinate because they cannot tell truth from untruth. I personally have dozens of examples of prompts in which LLMs were asked to assess specific data and failed badly.
Perhaps even worse, the creation of a large, certified “clean” database will surely encourage researchers to engage in unrestrained data mining, looking for patterns that they can use to generate publications. They will surely find a treasure trove of useless patterns, continuing to fuel the still-ongoing replication crisis.
One exception was a session where the organizer argued that computers do not have consciousness. His reasoning was based on physics but his most compelling argument was a video of a wooden Turing machine that, in theory, is capable of doing anything that can be done by modern computers. How, he asked, can we seriously assert that dowels, levers, and ball bearings have consciousness?
On the other hand, I was frustrated (but perhaps frustration is an inevitable part of growing old) by the seeming naiveté of the discussion in a session on the potential use of LLMs as personal “tutors.” Almost everyone was either a researcher in private industry or working for a graduate school with highly motivated students. They were almost all excited about the possibility of students using LLMs to help them learn.
All the evidence so far is that most college students are not nearly as motivated as the discussants assumed. There are a few students who, out of pride, do not use LLMs at all. There are others who use LLMs to fact check (!) or to make editorial suggestions. Most students use LLMs to do their course work for them — to do homework, write papers, and take online tests. Some make perfunctory changes to LLM-generated papers to avoid LLM-detection algorithms. Others don’t even bother to read the papers they submit. Just hand it in and hope that the instructor will be unable or too busy to identify the paper as LLM-generated.
The COVID shutdown of in-person instruction accelerated the shift towards online “instruction.” Professors appreciate the drastically reduced demands on their time. Administrators appreciate the lower costs. Students appreciate the fact that they can cut-and-paste answers. Many scarcely read the questions and answers that they are cutting and pasting.
Some misguided students celebrate their cunning in cheating their way through college. I say “misguided” because they are really cheating themselves. The job market for recent college grads is terrible and one explanation is that Gen-Zers are viewed by employers as entitled and uneducated.
LLMs and social inequality
The sad reality is that LLMs are likely to make the terrible inequality in our nation even worse. Motivated students at elite schools, which can afford small classes with personalized instruction that develops critical thinking and communication skills, will thrive after college. Students at schools that offer anonymous instruction to large groups, with papers and tests that can be offloaded to LLMs, will flounder after graduation.
Jeff Hammerbacher, an early Facebook employee, once lamented that “The best minds of my generation are thinking about how to make people click ads. That sucks.” We can now add to the lament: intelligent, hard-working people figuring out ways to invade our privacy, sow disinformation, raze education, and hook people on social media and AI-companions.
Sci Foo was inspiring because it showcased so many serious people doing serious things enthusiastically. It was heartening that they were doing it for the love of science, not because they wanted to make money exploiting human weaknesses and addictions.
