Sunday, May 21, 2023

Math Researchers Reveal How to Pair Students for Maximal Learning in Group Settings

In elementary school, you might've been placed into a reading or spelling group with other kids who could handle words of the same length as you. Higher-skilled groups received different assignments than lower-skilled ones, each tailored to meet students where they were. But is this the best way to maximize learning for everyone? Would mixing stronger readers with weaker ones be more beneficial? Researchers at the University of Rochester and the University of Nevada teamed up to answer this question by working out the mathematically optimal way to create student groups for overall learning gains.

The research team—composed of two directors of Rochester's Center for Health and Technology and an education professor—began with several basic assumptions: multiple groups would be formed from the pool of students; the students would have varying skill levels; an optimal teaching environment is one in which a student is taught at their own skill level; and the “optimal grouping system” is the one that maximizes the collective benefit for all students.

They then created equations to model learning under each grouping strategy. A like-skill, tiered strategy produced groups whose members had similar skill levels. A cross-sectional strategy mixed skill levels so that, taken as a whole, the groups were comparable to one another. The former turned out to be the optimal solution, so maybe your old spelling group was the right way to teach after all. The findings were detailed in the journal Education Practice and Theory.
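The paper's actual equations aren't reproduced here, but the comparison can be illustrated with a toy model. The sketch below assumes a hypothetical learning function in which a student's gain decays with the distance between their skill and the level at which their group is taught; the function and the numbers are illustrative assumptions, not the authors' model.

```python
def gain(skill, taught_at):
    # Assumption: learning gain decays with the gap between a student's
    # skill and the level the group is taught at.
    return 1.0 / (1.0 + abs(skill - taught_at))

def total_learning(groups):
    # Sum every student's gain, teaching each group at its mean skill level.
    total = 0.0
    for group in groups:
        taught_at = sum(group) / len(group)
        total += sum(gain(s, taught_at) for s in group)
    return total

tiered = [[1, 2, 3, 4], [5, 6, 7, 8]]           # similar skills within each group
cross_sectional = [[1, 3, 5, 7], [2, 4, 6, 8]]  # skills mixed across groups

print(total_learning(tiered) > total_learning(cross_sectional))  # True
```

Under this toy learning function, the tiered arrangement yields a higher collective total, in line with the paper's conclusion; a different assumed gain function could change the outcome.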

“We showed that, mathematically speaking, grouping individuals with similar skill levels maximizes the total learning of all individuals collectively,” paper author Chad Heatwole says. “If one puts like-skilled students together, instructors can teach at a level that is not too advanced or trivial for the students and optimize the overall learning of all students collectively regardless of the group.” All students will benefit, but of course, this assumes that a teacher provides equally effective teaching to all groups.

The math is there, but what if the goals are different? Adjustments would have to be made. What if coaches were trying to develop one Olympic athlete out of a promising group of players? “In this latter case, you would design the coaching and train the other players for the benefit or growth of one player,” Heatwole says. “It might mean no one else benefits, while one person benefits to the highest degree. But that’s not how our approach was designed.” You could do the math, however. “That’s the beautiful part of this,” he says. “We’re just laying down the facts and saying these are the assumptions, this is the mathematical approach, and this is what the math shows. This is a practical example of how math and science can help solve age-old questions and facilitate the learning, growth, and potential of all parties.”

4th Edition of International Conference on Mathematics and Optimization Methods

Website Link: https://maths-conferences.sciencefather.com/

Award Nomination: https://x-i.me/XU6E

Instagram: https://www.instagram.com/maths98574/

Twitter: https://twitter.com/AnisaAn63544725

Pinterest: https://in.pinterest.com/maxconference20022/

#maths #numericals #algebra #analysis #mathematics #number #complex #graphics #graphs

New Math Research Group Reflects a Schism in the Field


A new organization called the Association for Mathematical Research (AMR) has ignited fierce debates in the math research and education communities since it was launched last October. Its stated mission is “to support mathematical research and scholarship”—a goal similar to that proclaimed by two long-standing groups: the American Mathematical Society (AMS) and the Mathematical Association of America (MAA). In recent years the latter two have initiated projects to address racial, gender and other inequities within the field. The AMR claims to have no position on social justice issues, and critics see its silence on those topics as part of a backlash against inclusivity efforts. Some of the new group’s leaders have also spoken out in the past against certain endeavors to diversify mathematics. The controversy reflects a growing division between researchers who want to keep scientific and mathematical pursuits separate from social issues that they see as irrelevant to research and those who say even pure mathematics cannot be considered separately from the racism and sexism in its culture.

With bias, harassment and exclusion widely acknowledged to exist within the mathematics community, many find it dubious that a professional organization could take no stance on inequity while purporting to serve the needs of mathematicians from all backgrounds. “It’s a hard time to be a mathematician,” says Piper H, a mathematician at the University of Toronto. In 2019 less than 1 percent of doctorates were awarded to Black mathematicians, and just 29 percent were awarded to women.

Joel Hass, a mathematician at the University of California, Davis, and current president of the AMR, describes the group as “definitely focused on being inclusive.” He adds that the AMR “welcomes all to join us in supporting mathematical research and scholarship. In early 2022 we plan to open membership to anyone in the world who wishes to join us. There will be no fees or dues. By removing financial barriers to entry, we will make it easier to have participation from anyone across the world. Mathematical research is a truly global endeavor that transcends nation, creed and culture.”

The AMR has presented itself as neutral on social issues. An invitation letter sent to potential founding members of the organization states, “Though individual members may be active in educational, social, or political issues related to the profession, the AMR intends to focus exclusively on matters of research and scholarship.”

Louigi Addario-Berry, a mathematician at McGill University in Montreal, wrote about the AMR on his blog. He told Scientific American he is speaking up because “I think this is an organization whose existence, development and flourishing will hurt a lot of members of the mathematical community who I respect. It is being founded by people who have publicly stated views I find harmful—both hurtful to me as an individual and detrimental to the creation of an inclusive and welcoming mathematical community.”

Hass responded in a statement to Scientific American: “The focus of the AMR is on supporting mathematical research and this goal benefits all members of the mathematics community.” But Addario-Berry questions how the AMR can be neutral on social justice issues when some of its leaders have previously taken strong public stances on some of these topics.

Abigail Thompson is a mathematician at U.C. Davis and current secretary of the AMR. In December 2019, nearly a year into her term as a vice president of the AMS (a term that ends this month), she wrote an opinion piece opposing the increasingly common practice of asking university faculty candidates to write diversity statements during the hiring process. These statements are meant to demonstrate a prospective hire’s experience with and commitment to supporting a diverse, inclusive environment within a mathematics department. Thompson compared them to McCarthy-era loyalty oaths.

Her piece was published in the Notices of the American Mathematical Society and created such a stir that the journal later published 25 pages of responses to it—a mix of negative and positive. (Disclosure: The author of this Scientific American article wrote two unrelated articles for the Notices of the AMS last year.)

Among the responses in the Notices of the AMS were three open letters that were each signed by hundreds of people. One of those letters, which had more than 600 signatures, opposed Thompson’s position and the journal’s decision to publish her article.

Another, which had more than 200 signatures, said, “We applaud Abigail Thompson for her courageous leadership in bringing this issue to the attention of the broader Mathematics Community.” And it described mandatory diversity statements as being among “mistakes to avoid.” Several members of the AMR’s current board of directors signed that letter.

The third letter, which had more than 800 signatures, including most of the members of the AMR’s current board of directors, expressed concerns about the backlash to her piece. Some researchers had advocated telling students not to apply to Thompson’s department at U.C. Davis because of her stance, for instance. The letter stated, “Regardless of where anyone stands on the issue of whether diversity statements are a fair or effective means to further diversity aims, we should agree that this attempt to silence opinions is damaging to the profession.”

Another AMR founding member and a member of its board of directors, Robion “Rob” Kirby, is a mathematician at the University of California, Berkeley. In a post entitled “Sexism in Mathematics???” on his Web site, he wrote, “People who say that women can’t do math as well as men are often called sexist, but it is worth remembering that some evidence exists and the topic is a legimate [sic] one, although Miss Manners might not endorse it.”

Hass, Thompson, Kirby and some other members of the AMR signed a July 13, 2021, open letter opposing potential changes to California’s state math curriculum framework for K–12 public schools. The changes “are meant to address ways curriculum can meet the needs of as many students as possible, making math more accessible,” according to a statement on the California Department of Education’s Web site. But the letter Hass, Thompson and Kirby signed argues that the new curriculum “distracts from actual mathematics by having teachers insert ‘environmental and social justice’ into the math curriculum.” And it states, “We believe infusing mathematics with political rhetoric is alien to mathematics as a discipline, and will do lasting damage—including making math dramatically harder for students whose first language is not English.”

The AMS and the MAA have publicly acknowledged the need to work toward a more inclusive mathematical community. Last year an AMS task force released a 68-page report that, in the organization’s words, details “the historical role of the AMS in racial discrimination; and recommends actions for the AMS to take to rectify systemic inequities in the mathematics community.” In 2020 an MAA committee stated that the mathematics community must “actively work to become anti-racist” and “hold ourselves and our academic institutions accountable for the continued oppression of Black students, staff, and faculty.” It also addressed Black mathematicians specifically, saying, “We are actively failing you at every turn as a society and as a mathematics community. We kneel together with you. #BlackLivesMatter.”

In contrast, the AMR has not released any official statements about injustice. “I am supposed to believe, in the year 2021, that this omission is not itself an act of racism?” asks Piper H, who spoke to Scientific American late last year. “How am I, as a 40-year-old Black American mathematician, parent, and person who has paid a bit of attention to American history and American present, supposed to believe that AMR’s refusal to address the actual obstacles that real mathematicians face to doing mathematical research and scholarship is anything other than an insult and a mockery?”

Hass denies that the AMR’s current silence on diversity, equity and inclusion in the field is a message. “Our membership and planned activities will be open to anyone and everyone,” he says. “The AMR welcomes all who want to join our mission of advancing mathematical research and scholarship. We are broadening opportunities around the world for people to engage in mathematical research.”

“It’s not just a coincidence that the AMR was founded on the heels of a greater push for diversity within the AMS,” wrote Lee Melvin Peralta, a mathematics education graduate student at Michigan State University, in the November 16, 2021, newsletter of the Global Math Department, an organization of math educators. The AMR, Peralta added, “seems more like a separatist organization for those people who are striving for some kind of ‘purity’ within mathematics away from ‘impure’ considerations of race, gender, class, ability, sexual orientation, and socioeconomic status (among others).”

Hass denies that the AMR’s founding had anything to do with the antiracism push at the AMS or the MAA. “The changes in the research environment caused by the COVID pandemic revealed new opportunities for the development and communication of mathematical research, allowing for incorporation of new technologies and international activities,” he says. “We felt there was room for a new organization that would explore these.” Hass adds that “the AMS and MAA are wonderful organizations that we hope to work with, along with other organizations such as SIAM [Society for Industrial and Applied Mathematics], ACM [Association for Computing Machinery] and many non-U.S.-based groups.”

Some of the AMR’s founding members have left the organization amid the controversy. “To create an organization to do something positive requires the trust and goodwill of the community that it wants to affect. And this is something that the AMR does not have at this point,” wrote Daniel Krashen, a mathematician at the University of Pennsylvania, in a November 14, 2021, Twitter thread. “I have no desire to negatively impact the mathematical community by my actions and words. I see that some people feel less safe and less heard by my actions, and for this I apologize. I have decided to withdraw my membership.”


Sunday, May 14, 2023

Math model predicts several useful new drug combinations that may help treat heart attacks


Researchers used mice to develop a mathematical model of a myocardial infarction, popularly known as a heart attack.

The new model predicts several useful new drug combinations that may one day help treat heart attacks, according to researchers at The Ohio State University.

Typically caused by blockages in the coronary arteries (the vessels that supply blood to the heart), these cardiovascular events strike more than 800,000 Americans every year, and about 30% of them are fatal. But even for those who survive, the damage these attacks inflict on the heart muscle is permanent and can lead to dangerous inflammation in the affected areas of the heart.

Treatment to restore blood flow to these blocked passages of the heart often includes surgery and drugs, or what's known as reperfusion therapy. Nicolae Moise, lead author of the study and a postdoctoral researcher in biomedical engineering at Ohio State, said the study uses mathematical algorithms to assess the efficacy of the drugs used to combat the potentially lethal inflammation many patients experience in the aftermath of an attack.

"Biology and medicine are starting to become more mathematical," Moise said. "There's so much data that you need to start integrating it into some kind of framework." While Moise has worked on other mathematical models of animal hearts, he said that the framework detailed in the current paper is the most detailed schematic of myocardial infarctions in mice ever made.

The research is published in the Journal of Theoretical Biology.

Represented by a series of differential equations, the model Moise's team created was built using data from previous animal studies. In medicine, differential equations are often used to describe how disease processes evolve over time.

But this study chose to model how certain cells, such as myocytes, neutrophils and macrophages (cells imperative to fighting infection and clearing necrosis, or tissue death in the heart), react to four different immunomodulatory drugs over a period of one month. These drugs are designed to suppress the immune system so that it causes less damaging inflammation in the injured parts of the heart.
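The published system isn't reproduced here, but the general approach can be sketched with a deliberately simplified two-variable model: one equation for necrotic tissue being cleared and one for an inflammatory cell population, with a single parameter standing in for a drug that damps cell recruitment. The variables, rate constants, and drug effect below are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of an ODE model of post-infarction inflammation,
# integrated with a simple forward-Euler scheme.
# n(t): necrotic tissue, m(t): inflammatory cells (e.g., macrophages).
# 'drug' scales down cell recruitment (0 = no drug, 1 = full block).
# All rates are made-up illustrative values.

def simulate(drug=0.0, days=30, dt=0.01):
    n, m = 1.0, 0.0      # initial necrotic tissue and inflammatory cells
    peak_m = 0.0
    for _ in range(int(days / dt)):
        dn = -0.5 * m * n                      # cells clear dead tissue
        dm = (1.0 - drug) * 2.0 * n - 0.8 * m  # recruitment minus decay
        n += dn * dt
        m += dm * dt
        peak_m = max(peak_m, m)
    return n, peak_m

n_untreated, peak_untreated = simulate(drug=0.0)
n_treated, peak_treated = simulate(drug=0.5)
# In this toy model the drug lowers peak inflammation, but dead tissue
# is cleared more slowly — the kind of trade-off such models quantify.
```

This captures only the flavor of the approach; the actual model tracks several cell types and drug mechanisms simultaneously.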

This research focused on the drugs' efficacy an hour after the mice were treated.

Their findings showed that certain combinations of these drug inhibitors were more efficient at reducing inflammation than others. "In medicine, math and equations can be used to describe these systems," Moise said. "You just need to observe, and you'll find rules and a coherent story between them.

"With the therapies that we're investigating in our model, we can make the patient outcome better, even with the best available medical care," he said.

Depending on their health beforehand, it can take a person anywhere from six to eight months to heal from a heart attack. The quality of care patients receive in those first few weeks could set the tone for how long their road to recovery will be.

Because Moise's simulation is purely theoretical, it won't lead to improved therapies anytime soon. More precise mouse data is needed before their work can become an asset to other scientists, but Moise said he does envision the model as a potential tool in the fight against the ravages of heart disease.

The co-author of the study was Avner Friedman, professor of mathematics at Ohio State. This research was supported by Ohio State's Mathematical Biosciences Institute and the National Science Foundation.






US teens say they have new proof for 2,000-year-old mathematical theorem


New Orleans students Calcea Johnson and Ne’Kiya Jackson recently presented their findings on the Pythagorean theorem



Two New Orleans high school seniors who say they have proven Pythagoras’s theorem by using trigonometry – which academics for two millennia have thought to be impossible – are being encouraged by a prominent US mathematical research organization to submit their work to a peer-reviewed journal.

Calcea Johnson and Ne’Kiya Jackson, who are students of St Mary’s Academy, recently gave a presentation of their findings at the American Mathematical Society south-eastern chapter’s semi-annual meeting in Georgia.

They were reportedly the only two high schoolers to give presentations at the meeting attended by math researchers from institutions including the universities of Alabama, Georgia, Louisiana State, Ohio State, Oklahoma and Texas Tech. And they spoke about how they had discovered a new proof for the Pythagorean theorem.

The 2,000-year-old theorem established that the sum of the squares of a right triangle’s two shorter sides equals the square of the hypotenuse – the third, longest side opposite the shape’s right angle. Legions of schoolchildren have learned the notation summarizing the theorem in their geometry classes: a² + b² = c².

As mentioned in the abstract of Johnson and Jackson’s 18 March mathematical society presentation, trigonometry – the study of triangles – depends on the theorem. And since that particular field of study was discovered, mathematicians have maintained that any alleged proof of the Pythagorean theorem which uses trigonometry constitutes a logical fallacy known as circular reasoning, a term used when someone tries to validate an idea with the idea itself.

Johnson and Jackson’s abstract adds that the book with the largest known collection of proofs for the theorem – Elisha Loomis’s The Pythagorean Proposition – “flatly states that ‘there are no trigonometric proofs because all the fundamental formulae of trigonometry are themselves based upon the truth of the Pythagorean theorem’.”

But, the abstract counters, “that isn’t quite true”. The pair asserts: “We present a new proof of Pythagoras’s Theorem which is based on a fundamental result in trigonometry – the Law of Sines – and we show that the proof is independent of the Pythagorean trig identity sin²x + cos²x = 1.” In short, they could prove the theorem using trigonometry and without resorting to circular reasoning.
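The students' proof itself is not reproduced here, but the two results named in their abstract can be stated. For a triangle with sides a, b, c opposite angles A, B, C:

```latex
% The Law of Sines, the result the students say their proof rests on:
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}

% The Pythagorean trig identity their proof is said to avoid,
% since assuming it would amount to circular reasoning:
\sin^2 x + \cos^2 x = 1
```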

Johnson told the New Orleans television news station WWL it was an “unparalleled feeling” to present her and Jackson’s work alongside university researchers.


“There’s nothing like it – being able to do something that people don’t think that young people can do,” Johnson said to the station. “You don’t see kids like us doing this – it’s usually, like, you have to be an adult to do this.”

Alluding to how St Mary’s slogan is “No excellence without hard labor,” the two students credited their teachers at the all-girls school in New Orleans’s Plum Orchard neighborhood for challenging them to accomplish something which mathematicians thought was not possible.

“We have really great teachers,” Jackson said to WWL during an interview published Thursday.

WWL reported that Jackson and Johnson are on pace to graduate this spring, and they intend to pursue careers in environmental engineering as well as biochemistry.

St Mary’s Academy administrators did not immediately respond to a request for comment on Friday. Prominent alumnae of the school include judge Dana Douglas, who is the first Black woman to serve on the bench of the federal fifth circuit court of appeals, and renowned restaurateur Leah Chase.

Catherine Roberts, executive director for the American Mathematical Society, said she encouraged the St Mary’s students to see about getting their work examined by a peer-reviewed journal, even at their relatively young age.

“Members of our community can examine their results to determine whether their proof is a correct contribution to the mathematics literature,” said Roberts, whose group hosts scientific meetings and publishes research journals.

Roberts also said American Mathematical Society members “celebrate these early career mathematicians for sharing their work with the wider mathematics community”.

“We encourage them to continue their studies in mathematics,” Roberts added.



To Teach Computers Math, Researchers Merge AI Approaches


The world has learned two things in the past few months about large language models (LLMs) — the computational engines that power programs such as ChatGPT and Dall·E. The first is that these models appear to have the intelligence and creativity of a human. They offer detailed and lucid responses to written questions, or generate beguiling images from just a few words of text.

The second thing is that they are untrustworthy. They sometimes make illogical statements, or confidently pronounce falsehoods as fact.

“They will talk about unicorns, but then forget that they have one horn, or they’ll tell you a story, then change details throughout,” said Jason Rute of IBM Research.

These are more than just bugs — they demonstrate that LLMs struggle to recognize their mistakes, which limits their performance. This problem is not inherent in artificial intelligence systems. Machine learning models based on a technique called reinforcement learning allow computers to learn from their mistakes to become prodigies at games like chess and Go. While these models are typically more limited in their ability, they represent a kind of learning that LLMs still haven’t mastered.

“We don’t want to create a language model that just talks like a human,” said Yuhuai (Tony) Wu of Google AI. “We want it to understand what it’s talking about.”

Wu is a co-author on two recent papers that suggest a way to achieve that. At first glance, they’re about a very specific application: training AI systems to do math. The first paper describes teaching an LLM to translate ordinary math statements into formal code that a computer can run and check. The second trained an LLM not just to understand natural-language math problems but to actually solve them, using a system called Minerva.

Together, the papers suggest the shape of future AI design, where LLMs can learn to reason via mathematical thinking.

“You have things like deep learning, reinforcement learning, AlphaGo, and now language models,” said Siddhartha Gadgil, a mathematician at the Indian Institute of Science in Bangalore who works with AI math systems. “The technology is growing in many different directions, and they all can work together.”
Not-So-Simple Translations

For decades, mathematicians have been translating proofs into computer code, a process called formalization. The appeal is straightforward: If you write a proof as code, and a computer runs the code without errors, you know that the proof is correct. But formalizing a single proof can take hundreds or thousands of hours.
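As a small illustration of what formalized mathematics looks like (not taken from any particular project), a one-line statement and proof in Lean 4 might read as follows; `my_add_comm` is a made-up name, and the proof simply reuses the library lemma `Nat.add_comm`:

```lean
-- A formalized statement: addition of natural numbers is commutative.
-- If Lean accepts this file, the proof is machine-checked.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Research-level proofs consist of thousands of such statements chained together, which is why manual formalization takes so long.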


Over the last five years, AI researchers have started to teach LLMs to automatically formalize, or “autoformalize,” mathematical statements into the “formal language” of computer code. LLMs can already translate one natural language into another, such as from French to English. But translating from math to code is a harder challenge. There are far fewer example translations with which to train an LLM, for example, and formal languages don’t always contain all the vocabulary necessary.

“When you translate the word ‘cheese’ from English to French, there is a French word for cheese,” Rute said. “The problem is in mathematics, there isn’t even the right concept in the formal language.”

That’s why the seven authors of the first paper, with a mix of academic and industry affiliations, chose to autoformalize short mathematical statements rather than entire proofs. The researchers worked primarily with an LLM called Codex, which is based on GPT-3 (a predecessor of ChatGPT) but has additional training on technical material from sources like GitHub. To get Codex to understand math well enough to autoformalize, they provided it with just two examples of natural-language math problems and their formal code translations.

After that brief tutorial, they fed Codex the natural-language statements of nearly 4,000 math problems from high school competitions. Its performance at first might seem underwhelming: Codex translated them into the language of a mathematics program called Isabelle/HOL with an accuracy rate of just under 30%. When it failed, it made up terms to fill gaps in its translation lexicon.


“Sometimes it just doesn’t know the word it needs to know — what the Isabelle name for ‘prime number’ is, or the Isabelle name for ‘factorial’ is — and it just makes it up, which is the biggest problem with these models,” Rute said. “They do a lot of guessing.”

But for the researchers, the important thing was not that Codex failed 70% of the time; it was that it managed to succeed 30% of the time after seeing such a small number of examples.

“They can do all these different tasks with only a few demonstrations,” said Wenda Li, a computer scientist at the University of Cambridge and a co-author of the work.

Li and his co-authors see the result as representative of the kind of latent capacities LLMs can acquire with enough general training data. Prior to this research, Codex had never tried to translate between natural language and formal math code. But Codex was familiar with code from its training on GitHub, and with natural-language mathematics from the internet. To build on that base, the researchers only had to show it a few examples of what they wanted, and Codex could start connecting the dots.

“In many ways what’s amazing about that paper is [the authors] didn’t do much,” Rute said. “These models had this natural ability to do this.”

Researchers saw the same thing happen when they tried to teach LLMs not only how to translate math problems, but how to solve them.
Minerva’s Math

The second paper, though independent of the earlier autoformalization work, has a similar flavor. The team of researchers, based at Google, trained an LLM to answer, in detail, high school competition-level math questions such as “A line parallel to y = 4x + 6 passes through (5, 10). What is the y-coordinate of the point where this line crosses the y-axis?”

The authors started with an LLM called PaLM that had been trained on general natural-language content, similar to GPT-3. Then they trained it on mathematical material like arxiv.org pages and other technical material, mimicking Codex’s origins. They named this augmented model Minerva.


The researchers showed Minerva four examples of what they wanted. In this case, that meant step-by-step solutions to natural-language math problems.

Then they tested the model on a range of quantitative reasoning questions. Minerva’s performance varied by subject: It answered questions correctly a little better than half the time for some topics (like algebra), and a little less than half the time for others (like geometry).

One concern the authors had — a common one in many areas of AI research — was that Minerva answered questions correctly only because it had already seen them, or similar ones, in its training data. This issue is referred to as “pollution,” and it makes it hard to know whether a model is truly solving problems or merely copying someone else’s work.

“There is so much data in these models that unless you’re trying to avoid putting some data in the training set, if it’s a standard problem, it’s very likely it’s seen it,” Rute said.

To guard against this possibility, the researchers had Minerva take the 2022 National Math Exam from Poland, which came out after Minerva’s training data was set. The system got 65% of the questions right, a decent score for a real student, and a particularly good one for an LLM, Rute said. Again, the positive results after so few examples suggested an inherent ability for well-trained models to take on such tasks.

“This is a lesson we keep learning in deep learning, that scale helps surprisingly well with many tasks,” said Guy Gur-Ari, a researcher formerly at Google and a co-author of the paper.

The researchers also learned ways to boost Minerva’s performance. For example, in a technique called majority voting, Minerva solved the same problem multiple times, counted its various results, and designated its final answer as whatever had come up most often (since there’s only one right answer, but so many possible wrong ones). Doing this increased its score on certain problems from 33% to 50%.
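Majority voting is simple to sketch: collect several sampled answers to the same problem and keep the most frequent one. The list of answers below is a hypothetical stand-in for model samples, not output from Minerva.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer among sampled solutions."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers from five samples of the same problem:
# the correct answer only needs to be the most common, not unanimous.
samples = ["-10", "-10", "12", "-10", "7"]
print(majority_vote(samples))  # -10
```

The technique works because independent wrong answers tend to scatter across many values, while correct reasoning paths tend to converge on the same one.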

Also important was teaching Minerva to break its solution into a series of steps, a method called chain-of-thought prompting. This had the same benefits for Minerva that it does for students: It forced the model to slow down before producing an answer and allowed it to devote more computational time to each part of the task.

“If you ask a language model to explain step by step, the accuracy goes up immensely,” Gadgil said.
The Bridge Forms

While impressive, the Minerva work came with a substantial caveat, which the authors also noted: Minerva has no way of automatically verifying whether it has answered a question correctly. And even if it did answer a question correctly, it can’t check that the steps it followed to get there were valid.

“It sometimes has false positives, giving specious reasons for correct answers,” Gadgil said.

In other words, Minerva can show its work, but it can’t check its work, which means it needs to rely on human feedback to get better — a slow process that may put a cap on how good it can ever get.

“I really doubt that approach can scale up to complicated problems,” said Christian Szegedy, an AI researcher at Google and a co-author of the earlier paper.

Instead, the researchers behind both papers hope to begin teaching machines mathematics using the same techniques that have allowed machines to get good at games. The world is awash in math problems, which could serve as training fodder for systems like Minerva, but such a system can’t recognize a “good” move in math the way AlphaGo knows when it has played well at Go.

“On the one side, if you work on natural language or Minerva type of reasoning, there’s a lot of data out there, the whole internet of mathematics, but essentially you can’t do reinforcement learning with it,” Wu said. On the other side, “proof assistants provide a grounded environment but have little data to train on. We need some kind of bridge to go from one side to the other.”

Autoformalization is that bridge. Improvements in autoformalization could help mathematicians automate aspects of the way they write proofs and verify that their work is correct.

By combining the advancements of the two papers, systems like Minerva could first autoformalize natural-language math problems, then solve them and check their work using a proof assistant like Isabelle/HOL. This instant check would provide the feedback necessary for reinforcement learning, allowing these programs to learn from their mistakes. Finally, they’d arrive at a provably correct answer, with an accompanying list of logical steps — effectively combining the power of LLMs and reinforcement learning.
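A minimal sketch of that feedback loop, with hypothetical stand-in functions (a real system would call an LLM for the first two steps and a proof assistant such as Isabelle/HOL for the third):

```python
# Hypothetical stand-ins: in a real pipeline these would be model calls
# and a proof-assistant check, not string manipulation.
def autoformalize(problem):
    """LLM step: natural-language problem -> formal statement."""
    return f"theorem: {problem}"

def attempt_proof(formal_statement):
    """LLM step: propose a candidate proof of the statement."""
    return f"proof of ({formal_statement})"

def verify(formal_statement, proof):
    """Proof-assistant step: mechanically check the proof (stubbed here)."""
    return proof.endswith(f"({formal_statement})")

def training_signal(problem):
    """One loop iteration: reward 1 if the proof checks, else 0."""
    stmt = autoformalize(problem)
    proof = attempt_proof(stmt)
    return 1 if verify(stmt, proof) else 0

print(training_signal("2 + 2 = 4"))  # prints 1
```

The key design point is the last function: the mechanical check replaces slow human feedback, producing the reward signal that reinforcement learning needs.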

AI researchers have even broader goals in mind. They view mathematics as the perfect proving ground for developing AI reasoning skills, because it’s arguably the hardest reasoning task of all. If a machine can reason effectively about mathematics, the thinking goes, it should naturally acquire other skills, like the ability to write computer code or offer medical diagnoses — and maybe even to root out those inconsistent details in a story about unicorns.



3rd Edition of International Conference on Mathematics and Optimization Methods

Website Link:https://maths-conferences.sciencefather.com/

Award Nomination: https://x-i.me/XU6E

Instagram: https://www.instagram.com/maths98574/

Twitter: https://twitter.com/AnisaAn63544725

Pinterest: https://in.pinterest.com/maxconference20022/

#maths #numericals #algebra #analysis #analysis #mathmatics #numericals #number #complex #graphics #graphs

Saturday, May 13, 2023

Our Brains Have Specific ‘Math Neurons,’ New Study Finds





It turns out that everyone has what it takes to do maths. In a new study published in Current Biology last week, scientists identified neurons in the brain that fire specifically for different math operations. There is one set that lights up during additions, and another during subtractions.

What’s more, the neurons activate regardless of how the operation is signaled to the brain — whether through symbols or through written instructions. Previously, mental arithmetic was something of a mystery: we knew we could do it, but we didn’t know exactly what happened in our brains while we did.

The study included five women and four men, and researchers attached electrodes to their brains’ temporal lobes to study neural activity. While performing arithmetic tasks, the electrodes picked up neural activity from specific neurons — more significantly, different ones were involved for different tasks.

“For example, when subjects were asked to calculate ‘5 and 3’, their addition neurons sprang back into action; whereas for ‘7 less 4,’ their subtraction neurons did,” Esther Kutter, one of the study’s authors, told Science Daily.

The implications are significant — it means that we come prewired to perform mathematical calculations, and that our brain cells come encoded with these specific instructions.

The new research also expands our understanding of which parts of the brain are involved in mathematical calculations. While previously considered the domain of the prefrontal lobe alone, the temporal lobe is also implicated in these new findings. The neurons themselves were dubbed “rule-selective” neurons, for the way in which they responded only to specific arithmetic rules.


We’ve known for a while that brain geography matters for someone’s ability to perform specific mathematical tasks. While multiple regions of our brains are simultaneously involved in cognitive tasks, “It’s about multiple regions working in concert. The better they work in concert, the more essentially they speak to each other, the stronger the gains in numerical abilities,” according to neuroscientist Daniel Ansari, who was commenting on a previous study.

Moreover, previous studies have highlighted how understanding more about how our brains work can help us address neurodevelopment problems better, according to researchers. Studies like these, in other words, aren’t meant to be deterministic.

Further, past research has also corroborated the fact that different types of math use different parts of the brain. With the help of fMRI scans, many studies have tried to understand what exactly happens in someone’s brain while performing math, but this is one of the first to pinpoint the exact neurons involved. The more we know about how our brains do maths, the more we can fine-tune how we teach math to children.

An MIT study, for instance, inferred that rote arithmetic learning doesn’t enrich mathematical intuition, because it doesn’t involve the parts of our brain that are specifically designed to develop what scientists have called the “number sense.”

Moreover, bilingual children may also take time switching from the language of instruction to their native language while performing math, the MIT study warns. Overall, many studies together show that all children possess the means to encode mathematical instructions; how they are relayed may thus matter.

“This study marks an important step towards a better understanding of one of our most important symbolic abilities, namely calculating with numbers,” Florian Mormann from the University Hospital Bonn and first author of the present study, said.


Mathematicians Discovered a New Kind of Prime Number






Digitally delicate prime numbers become composite with this one weird trick.
Math researchers proved these primes exist using the bucket proof method.
There are no known examples so far, but mathematicians are hopeful.


In new research, mathematicians have revealed a new category of “digitally delicate” prime numbers. There are infinitely many of these primes, and each one turns back into a composite faster than Cinderella at midnight with a change of any individual digit.

A digitally delicate prime becomes composite if you change any one of its digits to any other value. To use a bite-size example, consider 101, which is a prime. Change a digit to get 201, 102, or 111, and you have values that are divisible by 3 and therefore composite.


This idea is decades old, so what’s new? Now, mathematicians from the University of South Carolina have established an even more specific niche of the digitally delicate primes: widely digitally delicate primes. These are primes with added, infinite “leading zeros,” which don’t change the original prime, but make a difference as you change the 0s into other digits to test for delicacy.

So instead of 101, consider 000101. That value is prime, and the zeros are just there for show, basically. But if you change the zeros, like 000101 to 100101, now you have a composite number that’s divisible by 3. The mathematicians believe there are infinite widely digitally delicate primes, but so far, they can’t come up with a single real example. They’ve tested all the primes up to 1,000,000,000 by adding leading zeros and doing the math.
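The basic (non-widely) property is easy to check by brute force. A minimal sketch: try every single-digit substitution and confirm none of the results is prime. (101 fails the full test, since changing it to 701 still gives a prime; the smallest base-10 digitally delicate prime is known to be 294001.)

```python
def is_prime(n):
    """Trial-division primality test; fine for small numbers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_digitally_delicate(p):
    """True if p is prime but every single-digit change makes it non-prime."""
    if not is_prime(p):
        return False
    digits = str(p)
    for pos in range(len(digits)):
        for d in "0123456789":
            if d == digits[pos]:
                continue
            candidate = int(digits[:pos] + d + digits[pos + 1:])
            if is_prime(candidate):
                return False
    return True

print(is_digitally_delicate(101))     # False: 701, for one, is still prime
print(is_digitally_delicate(294001))  # True: the smallest base-10 example
```

The widely digitally delicate version adds substitutions into the leading zeros as well, which is why no explicit example has been found despite the existence proof.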

South Carolina math professor Michael Filaseta and former graduate student Jeremiah Southwick worked together on the widely digitally delicate number research, publishing their findings in Mathematics of Computation and arXiv. Even without specific examples, they proved the numbers exist in base 10 (meaning numbers that use our 0-9 counting system; compare with binary, base 2, with just 0 and 1) and that there are infinitely many.


The proof itself relies on a kind of logic that’s like the simple divisibility rules on steroids. Certain families of numbers, like those that contain 9s or whose digits sum to a certain value, can be blanket-proven and then assigned to separate “buckets.” The more buckets there are, the more of the whole gigantic set of integers is “covered” by the proof.

“The situation involving widely digitally delicate primes is more complicated, of course,” Quanta’s Steve Nadis reports. “You’ll need a lot more buckets, something on the order of 10^25,000, and in one of those buckets every prime number is guaranteed to become composite if any of its digits, including its leading zeros, is increased.”


This isn’t the kind of mathematics that extends to a practical application—it’s number theory that mostly works for its own sake as a way to explore the limits of mathematics. Ever since Filaseta and Southwick published their proofs, more special cases of digitally delicate numbers have been in the works as other mathematicians use their research as a jumping-off point.

What if you took 101 and inserted a 1 to get 1011? What if you took one digit away to get 10? The possibilities are digitally unlimited.


New algorithm aces university math course questions


Multivariable calculus, differential equations, linear algebra — topics that many MIT students can ace without breaking a sweat — have consistently stumped machine learning models. The best models have only been able to answer elementary or high school-level math questions, and they don’t always find the correct solutions.

Now, a multidisciplinary team of researchers from MIT and elsewhere, led by Iddo Drori, a lecturer in the MIT Department of Electrical Engineering and Computer Science (EECS), has used a neural network model to solve university-level math problems in a few seconds at a human level.

The model also automatically explains solutions and rapidly generates new problems in university math subjects. When the researchers showed these machine-generated questions to university students, the students were unable to tell whether the questions were generated by an algorithm or a human.

This work could be used to streamline content generation for courses, which could be especially useful in large residential courses and massive open online courses (MOOCs) that have thousands of students. The system could also be used as an automated tutor that shows students the steps involved in solving undergraduate math problems.

“We think this will improve higher education,” says Drori, the work’s lead author who is also an adjunct associate professor in the Department of Computer Science at Columbia University, and who will join the faculty at Boston University this summer. “It will help students improve, and it will help teachers create new content, and it could help increase the level of difficulty in some courses. It also allows us to build a graph of questions and courses, which helps us understand the relationship between courses and their pre-requisites, not just by historically contemplating them, but based on data.”

The work is a collaboration including students, researchers, and faculty at MIT, Columbia University, Harvard University, and the University of Waterloo. The senior author is Gilbert Strang, a professor of mathematics at MIT. The research appears this week in the Proceedings of the National Academy of Sciences.

A “eureka” moment

Drori and his students and colleagues have been working on this project for nearly two years. They were finding that models pretrained using text only could not do better than 8 percent accuracy on high school math problems, and those using graph neural networks could ace machine learning course questions but would take a week to train.

Then Drori had what he describes as a “eureka” moment: He decided to try taking questions from undergraduate math courses offered by MIT and one from Columbia University that had never been seen before by a model, turning them into programming tasks, and applying techniques known as program synthesis and few-shot learning. Turning a question into a programming task could be as simple as rewriting the question “find the distance between two points” as “write a program that finds the difference between two points,” or providing a few question-program pairs as examples.
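As an illustration of that rewriting step (hypothetical code, not the paper’s output), the question “find the distance between two points” becomes a short program:

```python
import math

# "Find the distance between two points" rewritten as a program, the kind
# of question-to-code transformation the researchers describe.
def distance(p, q):
    """Euclidean distance between two points given as (x, y) pairs."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

print(distance((0, 0), (3, 4)))  # prints 5.0
```

Running the generated program then yields the numeric answer, rather than asking the model to produce the number directly.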

Before feeding those programming tasks to a neural network, however, the researchers added a new step that enabled it to vastly outperform their previous attempts.

In the past, they and others who’ve approached this problem have used a neural network, such as GPT-3, that was pretrained on text only, meaning it was shown millions of examples of text to learn the patterns of natural language. This time, they used a neural network pretrained on text that was also “fine-tuned” on code. This network, called Codex, was produced by OpenAI. Fine-tuning is essentially another pretraining step that can improve the performance of a machine-learning model.

The pretrained model was shown millions of examples of code from online repositories. Because this model’s training data included millions of natural language words as well as millions of lines of code, it learns the relationships between pieces of text and pieces of code.

Many math problems can be solved using a computational graph or tree, but it is difficult to turn a problem written in text into this type of representation, Drori explains. Because this model has learned the relationships between text and code, however, it can turn a text question into code, given just a few question-code examples, and then run the code to answer the problem.

“When you just ask a question in text, it is hard for a machine-learning model to come up with an answer, even though the answer may be in the text,” he says. “This work fills in that missing piece of using code and program synthesis.”

This work is the first to solve undergraduate math problems and moves the needle from 8 percent accuracy to over 80 percent, Drori adds.

Adding context

Turning math questions into programming tasks is not always simple, Drori says. Some problems require researchers to add context so the neural network can process the question correctly. A student would pick up this context while taking the course, but a neural network doesn’t have this background knowledge unless the researchers specify it.

For instance, they might need to clarify that the “network” in a question’s text refers to “neural networks” rather than “communications networks.” Or they might need to tell the model which programming package to use. They may also need to provide certain definitions; in a question about poker hands, they may need to tell the model that each deck contains 52 cards.

They automatically feed these programming tasks, with the included context and examples, to the pretrained and fine-tuned neural network, which outputs a program that usually produces the correct answer. It was correct for more than 80 percent of the questions.

The researchers also used their model to generate questions by giving the neural network a series of math problems on a topic and then asking it to create a new one.

“In some topics, it surprised us. For example, there were questions about quantum detection of horizontal and vertical lines, and it generated new questions about quantum detection of diagonal lines. So, it is not just generating new questions by replacing values and variables in the existing questions,” Drori says.

Human-generated vs. machine-generated questions

The researchers tested the machine-generated questions by showing them to university students. The researchers gave students 10 questions from each undergraduate math course in a random order; five were created by humans and five were machine-generated.

Students were unable to tell whether the machine-generated questions were produced by an algorithm or a human, and they gave human-generated and machine-generated questions similar marks for level of difficulty and appropriateness for the course.

Drori is quick to point out that this work is not intended to replace human professors.

“Automation is now at 80 percent, but automation will never be 100 percent accurate. Every time you solve something, someone will come up with a harder question. But this work opens the field for people to start solving harder and harder questions with machine learning. We think it will have a great impact on higher education,” he says.

The team is excited by the success of its approach and has extended the work to handle math proofs, but there are some limitations it plans to tackle. Currently, the model can’t answer questions with a visual component, nor solve problems that are computationally intractable.

In addition to overcoming these hurdles, they are working to scale the model up to hundreds of courses. With those hundreds of courses, they will generate more data that can enhance automation and provide insights into course design and curricula.




Friday, May 12, 2023






Researchers at Rochester Institute of Technology have developed MathDeck, an online search interface that allows anyone to easily create, edit and lookup sophisticated math formulas on the computer.

Created by an interdisciplinary team of more than a dozen faculty and students, MathDeck aims to make math notation interactive and easily shareable, rather than an obstacle to mathematical study and exploration. The math-aware search interface is free to the public and available to use at mathdeck.cs.rit.edu.

Researchers said the project stems from a growing public interest in being able to do web searches with math keywords and formulas. However, for many people, it can be difficult to accurately express sophisticated math without an understanding of the scientific markup language LaTeX.

With MathDeck, users can now enter and edit formulas in multiple ways, including handwriting, uploading a typeset formula image and text input using LaTeX. Using image processing and machine learning techniques, the interface is able to recognize formula images and hand-drawn symbols.

“With such a tool in hand, it will be much easier for experts and non-experts to enter complicated formulas and symbols accurately and have the search engines find mathematically relevant answers quickly and effectively,” said Anurag Agarwal, associate professor in RIT’s School of Mathematical Sciences. “It can also help people from different disciplines to collaborate, share their findings and perform searches more productively.”

MathDeck is one piece of a larger project called MathSeer, which is supported by nearly $1,000,000 in funding from the National Science Foundation and the Alfred P. Sloan Foundation. MathSeer is led by Richard Zanibbi, professor of computer science at RIT, Agarwal, Penn State University Professor C. Lee Giles and University of Maryland, College Park Professor Douglas W. Oard.

“The goal of MathSeer is to produce new technologies to provide ‘math search for the masses,’” said Zanibbi, who is also director of RIT’s Document and Pattern Recognition Lab in the Golisano College of Computing and Information Sciences. “This involves creating new search interfaces, AI algorithms for handwritten and image input, and search engine technologies that better support formulas in queries.”

In order to create a useful interface for MathDeck, the team had to better understand users’ search behavior, including how users express their queries and what types of documents they are looking for. They also noted that in mathematics, expressions and symbols often have multiple meanings and contexts.

“To tackle these complexities, we used our knowledge and expertise in math to make the system ‘aware’ of the mathematical nuances, so that it can interpret and represent the mathematical connection between the various objects in formulas with high accuracy, thereby resulting in effective search,” Agarwal said.

The interface will also help users save time, because they can save their sessions and favorite formulas. Users can manipulate and save formulas as chips, so they don’t have to re-enter the formula.

“Entering math formulas is a big challenge from the user’s perspective, as math is typically expressed in a two-dimensional space, while typing only produces a sequence of characters,” said Gavin Nishizawa, a computer science master’s student from Aiea, Hawaii, who was lead developer on the project.

MathDeck includes an auto-complete function for formulas and keywords. If users are searching for a popular symbol or formula, they’ll likely find an entity card. The card shows the formula, the name of its associated concept and a brief description.

“In formula search, there are math specific challenges, including ‘equivalent’ formulas with different variable names or terms in another order,” said Nishizawa, who also completed a software engineering degree at RIT in 2018. “For formula autocomplete, MathDeck searches entity cards by recognizing a formula’s structure, passing its structure representation into a neural network, and then producing an embedding vector that is compared against formulas in the entity cards.”
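A toy sketch of that embedding comparison (the vectors here are made up; MathDeck’s actual network and representations are not public in this article): structurally equivalent formulas with renamed variables should land close together in embedding space, so nearest-neighbor search finds them.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings: a query for x^2 + y^2 should rank the renamed
# but structurally identical a^2 + b^2 above an unrelated integral.
query = [0.9, 0.1, 0.3]
cards = {"a^2 + b^2": [0.88, 0.12, 0.29], "\\int e^x dx": [0.1, 0.9, 0.5]}
best = max(cards, key=lambda name: cosine_similarity(query, cards[name]))
print(best)  # prints "a^2 + b^2"
```

This is why variable names and term order don’t defeat the autocomplete: similarity is computed on the structure-derived vector, not on the raw LaTeX string.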

When it comes time to submit a query, users can select from 11 search engines, including standard search engines, like Google, and more math-focused systems, including Wolfram Alpha and Math Stack Exchange.

In the future, Zanibbi said the team plans to extend MathDeck. They are creating techniques to make formulas searchable in large PDF collections and working to improve formula and text search, as well as improving formula recognition in handwriting and images.

Zanibbi, Agarwal, Oard and RIT computing and information sciences Ph.D. student Behrooz Mansouri are also running ARQMath, an international task to benchmark and improve math-aware search technologies.

“There is a lot of complexity around math, so making the use of math more intuitive can help address many problems in math and science,” said Nishizawa. “Research in this area can have a significant positive impact on things like math literacy, understanding mathematical ideas and improving people’s quality of life.”


Sunday, May 7, 2023

The myth of the ‘math person’

In the 1970s, Sheila Tobias noticed something peculiar going on in mathematics. In one of her early studies, the graduate of Radcliffe College, self-described “scholar activist,” and author of 14 books, including the 1978 bestseller “Overcoming Math Anxiety,” gave elementary school students a sheet of paper, divided in half. On one side, they worked on a math problem; on the other, they wrote down how the problem made them feel.

“I’m finished,” one wrote. “Nobody else is. I must be wrong.”

Another wrote: “I’m not finished. Everybody else is. I must be wrong.”

Many remembered their parents saying something like, “Nobody in our family is good at math.” Others recalled the shame of standing at a blackboard, failing to solve an equation as their classmates heckled and laughed.

“Math anxiety is a serious handicap,” Tobias wrote in a 1976 article in Ms. magazine describing her findings. “It is handed down from mother to daughter with father’s amused indulgence. (‘Your mother never could balance a checkbook,’ he says fondly.)”

Today, math anxiety still smothers students — especially those who belong to groups historically underrepresented in the field — and there’s more at stake than a balanced checkbook. Threats like climate change, pandemics, and gerrymandering cannot be solved without math.

“You can’t begin to grasp those issues,” Tobias said in a talk at West Virginia University just one year before she died, in 2021, at the age of 86. (The death was not widely reported until a New York Times obituary appeared in September of this year.)

Almost 50 years have passed since Tobias, whose papers are held by Radcliffe’s Schlesinger Library, first described math anxiety’s impact on students — especially young girls and women. And yet, not much has changed. According to the cognitive scientist Sian Beilock’s 2019 Harvard Business Review article, “Americans Need to Get Over Their Fear of Math,” nearly half of first- and second-grade students say they are “moderately nervous” or “very, very nervous” about math, and a quarter of college students report moderate or high levels of math anxiety.

“Hating math seems to bring people together,” said Reshma Menon, a preceptor in Harvard’s mathematics department. “This isn’t just about my students. I’ll meet people at the grocery store or I’ll be in an Uber chatting with the driver. When I tell them I teach math, the immediate response is, ‘Oh my God, I used to hate math in school.’ Math anxiety is worldwide and very, very real.”

Math hatred, or what some call “math trauma,” is like the common cold: ubiquitous, tricky to trace, and hard to treat.

“There’s a genius myth in mathematics,” said Brendan Kelly, director of introductory math at Harvard. “There’s often this perception that success requires some natural ability, some unteachable qualities, some immutable traits.”

When students learn to write stories or play the violin, most don’t expect to replicate Toni Morrison or Niccolò Paganini in their first attempts. No one says, “I’m not a writing person.” But in math, said Allechar Serrano López, also a preceptor in mathematics at Harvard, “It gets decided when they’re literally children if they are going to be math people or if they’re not math people.” And because math is a gateway to almost every other field of science, that early stamp can squeeze students out of the STEM pipeline.

But the genius myth isn’t the only barrier.

Students come to college with vastly different educational backgrounds based on which elementary and secondary schools they attended. Some schools don’t even offer calculus, Menon noted. “During the pandemic, these differences became wider; the disparities are much more evident now than they were before,” she said.

Disparities across schools often disproportionately affect low-income students and students of color. “That divide creates lower confidence among students,” said Menon. “But there’s also the problem, generally, of women, students of color, and nonbinary students feeling like they don’t fit in.”







6 Marvelous Math Stories from 2022







Mathematics can be both mind-boggling and illuminating. Though it can require mental gymnastics to follow some recent developments in math research, the effort is often rewarded with fascinating truths. This year math seeped into diverse realms of our lives and the world, showing us that the field affects all of us. Here’s a look at some of 2022’s most captivating math developments, including a mathematical attempt to prove the existence of God, the use of algorithms to help make citizen assemblies more fair, a fun subfield that deals with bendable shapes that helped find “doughnuts in the brain,” and more.

CAN GOD BE PROVED MATHEMATICALLY?

What sounds like an absurd question is actually a starting point for a fascinating history of mathematical attempts to prove or disprove the existence of a divine being. Journalist Manon Bischoff documents efforts by Blaise Pascal, René Descartes, Kurt Gödel and others to investigate the nature of God. Gödel, in fact, wrote an ontological proof in the mid 20th century that attempted to use logic to establish that God exists. Later a computer algorithm determined that his chain of reasoning was unassailable—in other words, God must exist. The caveat, though, is that the proof only works if you accept Gödel’s initial assumptions.
THE ELUSIVE ORIGIN OF ZERO

Humans invented numbers long before they invented a numeral for nothing. The first emergence of the concept of “zero” has long been a historical mystery, with scholars variously identifying the inventors of zero as the ancient residents of South America, China, India and Cambodia, among other locations. Mathematics educator and historian Frank Swetz and mathematician Shaharir bin Mohamad Zain recount their own investigations into the possible origin of zero among early inhabitants of the Indonesian island of Sumatra.

EXTREME NUMBERS GET NEW NAMES

Earth weighs around one ronnagram—that’s 10^27 grams—and an electron’s mass is about one quectogram—10^-30 gram. The prefixes “ronna” and “quecto,” along with several others denoting especially humongous or minuscule numbers, were recently added to the International System of Units (SI). The decision was approved at the November General Conference on Weights and Measures, held near Paris, in part to handle the growing amounts of data being generated worldwide. Journalist Elizabeth Gibney notes an additional motivation for adding the new prefixes: to prevent unofficial ones, such as “hella” for 10^27, from taking hold.

THE MATHEMATICS BEHIND CITIZENS’ ASSEMBLIES

Citizens’ assemblies are randomly chosen groups of citizens whose demographics—age, gender, geography—represent a larger society. They have been used in Europe, Canada, Australia and U.S. states to weigh issues and make policy recommendations on topics such as abortion, carbon emissions and COVID protections. The mathematics of equitably and accurately choosing the members of an assembly turn out to be complex and intriguing. Computer scientist Ariel Procaccia describes his work on an algorithm that can compose a representative group of volunteers in the fairest possible way.
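Procaccia's actual algorithm is more sophisticated than what fits here—it optimizes each volunteer's chance of selection across all valid panels—but the core constraint it must satisfy can be illustrated with a much simpler stand-in: rejection sampling of panels until one matches the demographic quotas. Everything below (the pool, the age brackets, the quotas) is an invented toy example, not data or code from the research:

```python
import random
from collections import Counter

# Toy volunteer pool: 90 people, evenly split across three age brackets.
pool = [{"id": i, "age": ["18-34", "35-54", "55+"][i % 3]}
        for i in range(90)]

# Target: a 12-person panel mirroring a population that is
# one-third in each bracket.
QUOTAS = {"18-34": 4, "35-54": 4, "55+": 4}

def sample_panel(pool, quotas, max_tries=10000):
    """Draw uniformly random panels until one meets every quota exactly."""
    size = sum(quotas.values())
    for _ in range(max_tries):
        panel = random.sample(pool, size)
        counts = Counter(person["age"] for person in panel)
        if all(counts[group] == quota for group, quota in quotas.items()):
            return panel
    raise RuntimeError("no quota-satisfying panel found")

panel = sample_panel(pool, QUOTAS)
```

A known drawback of naive rejection sampling—and part of what motivates the fairer algorithms the article describes—is that it can give some volunteers a much higher chance of being chosen than others, depending on how their demographics interact with the quotas.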

THE EVOLVING QUEST FOR A GRAND UNIFIED THEORY OF MATHEMATICS

An area of mathematics research called the Langlands program has been dubbed “a Grand Unified Theory of Mathematics” because of the connections it makes between many disparate subfields. Based on a set of conjectures by Robert Langlands, a mathematician at the Institute for Advanced Study in Princeton, N.J., the program touches on geometry, number theory and algebra, among many additional ideas. Journalist Rachel Crowell describes how the Langlands program is developing and making links even beyond mathematics to physics and other realms of science.


SQUISHY MATH REVEALS DOUGHNUTS IN THE BRAIN

Human brains still have an advantage over computers in identifying certain kinds of patterns that are immediately clear to our eyes but opaque to algorithms. Yet sometimes a set of data is too large for humans to manage the job. Researchers are working on teaching computers better ways of identifying shapes and patterns in data using a tool called topological data analysis, which relies on the mathematics of bendable shapes. Mathematician and journalist Kelsey Houston-Edwards describes how this technique has helped identify doughnut-shaped structures of neurons in the brain, as well as new arrangements of molecules that could be the basis for novel drugs.

