Wednesday, June 28, 2023

Journey to infinite possibilities through mathematics exploration


Mathematics has provided amazing tools for science and scholarship for centuries! It gives us a better understanding of the world around us and can help us uncover the mysteries of the universe. An essential part of this discipline is mathematics research: proving and creating theorems, finding new applications, and exploring unsolved problems. Plus, it helps us develop new technologies and advance science. Pretty cool, right?

Two winners of the Hang Lung Mathematics Awards (HLMA), Dr. Kero Lau (2004 Bronze Award winner) and Ms. Ewina Pun (2012 Bronze Award winner), recently met with more than 100 secondary students from 50-plus schools in an online sharing session. The two promising young scientists shared their own experiences and thoughts about pursuing careers in science and research in a highly informative session where Professor Shing Yu Leung, Associate Dean of Science at HKUST, served as moderator.


Recalling their time in secondary school, both speakers had a keen interest in mathematics, which led them to the world of scientific research. Mathematics, they noted, is a language used daily in research, and research demands perseverance. Difficulties will inevitably arise in both research and calculation, but they urged students not to be afraid of making mistakes: these experiences may lead to the discovery of something new.

They also encouraged students to exchange ideas with their friends when solving problems. Research requires a great deal of teamwork, strong communication, and a cooperative spirit. Being able to work collaboratively and apply knowledge skilfully to real-life situations calls for soft skills that exams simply don't test.

The biennial HLMA provides opportunities for secondary school students to hone their research skills in this discipline and compete for HK$1 million in prizes.


If you want to dive deeper into the fascinating world of mathematics as Kero and Ewina did, research is the way to go! Through research, you can explore infinite possibilities, think critically, create innovative solutions to problems, and learn more about the world and yourself. With the right guidance and resources, you can embark on your own journey of discovery. And who knows what you might come up with!

All interested students are encouraged to take the first step by signing up for the 2023 HLMA Student Information Session, which will take place virtually on Friday, 24 February, from 6 to 7 pm. To register for the 2023 HLMA, please scan the QR code below:

4th Edition of International Conference on Mathematics and Optimization Method

Website Link: https://maths-conferences.sciencefather.com/

Award Nomination: https://x-i.me/XU6E

Instagram: https://www.instagram.com/maths98574/

Twitter: https://twitter.com/AnisaAn63544725

Pinterest: https://in.pinterest.com/maxconference20022/

#maths #numericals #algebra #analysis #mathematics #number #complex #graphics

Recounting the History of Math’s Transcendental Numbers




In 1886 the mathematician Leopold Kronecker famously said, “God Himself made the whole numbers — everything else is the work of men.” Indeed, mathematicians have introduced new sets of numbers besides the ones used to count, and they have labored to understand their properties.

Although each type of number has its own fascinating and complicated history, today they are all so familiar that they are taught to schoolchildren. Integers are just the whole numbers, plus the negative whole numbers and zero. Rational numbers are those that can be expressed as a quotient of integers, such as 3, −‍1/2 and 57/22. Their decimal expansions either terminate (−‍1/2 = −‍0.5) or eventually repeat (57/22 = 2.5909090909…). That means if a number has decimal digits that go on forever without repeating, it’s irrational. Together the rational and irrational numbers comprise the real numbers. Advanced students learn about the complex numbers, which are formed by combining the real numbers and imaginary numbers; for instance, i = √−1.
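The terminate-or-repeat behavior of rational numbers is easy to see by running the long division in code. The sketch below (plain Python, illustrative only) tracks remainders during the division; the first repeated remainder marks where the decimal cycle begins, and a remainder of zero means the expansion terminates.

```python
def decimal_expansion(num, den, max_digits=30):
    """Long division of num/den: return (whole part, decimal digits,
    index where the repeating cycle starts, or None if it terminates)."""
    digits, seen = [], {}
    rem = num % den
    while rem and rem not in seen and len(digits) < max_digits:
        seen[rem] = len(digits)   # remember where this remainder appeared
        rem *= 10
        digits.append(rem // den)
        rem %= den
    cycle_start = seen.get(rem)   # None => expansion terminated
    return num // den, digits, cycle_start

# 57/22 = 2.5909090909... : "90" repeats starting at the second decimal place
print(decimal_expansion(57, 22))  # (2, [5, 9, 0], 1)
print(decimal_expansion(1, 2))    # (0, [5], None) -- terminates
```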

One set of numbers, the transcendentals, is not as well known. Paradoxically, these numbers are both plentiful and exceedingly difficult to find. And their history is intertwined with a question that plagued mathematicians for millennia: Using only a compass and a straightedge, can you draw a square with the same area as a given circle? Known as squaring the circle, the question was answered only after the invention of algebra and a deeper understanding of π — the ratio of the circumference of any circle to its diameter.

What does it mean to discover a new set of numbers? Today we say that Hippasus of Metapontum, who lived in approximately the fifth century BCE, discovered irrational numbers. In fact, his discovery was geometric, not arithmetic. He showed that it’s possible to find two line segments, like the side and diagonal of a square, that can’t be divided into parts of equal length. Today we would say that their lengths are not rational multiples of each other. Because the diagonal is √2 times as long as the side, √2 is irrational.

It is impossible to divide the side and diagonal of a square into parts of equal length. Here a length divides the side into 10 equal parts, but the diagonal is divided into 14 equal parts with a small remainder.

In terms of constructions possible with just a compass and straightedge — the mathematical tools of antiquity — if we begin with a unit-length line segment, it’s possible to construct a segment with any positive rational length. However, we can also construct some irrational lengths. For instance, we’ve seen how to make √2; another famous irrational number, the golden ratio, (1+√5)/2, is the diagonal of a regular pentagon with side length 1.

Roughly 2,000 years after the Greeks first posed the question of squaring the circle, René Descartes applied new algebraic techniques to show in his 1637 treatise La Géométrie that the constructible lengths are precisely those that can be expressed using integers and the operations of addition, subtraction, multiplication, division and the calculation of square roots. Notice that all positive rational numbers have this form, as do √2 and the golden ratio. If π could be written in this way, it would finally let geometers square the circle — but π was not so easy to classify.

In the next 200 years, algebra matured significantly, and in 1837 a little-known French mathematician named Pierre Wantzel connected constructible numbers to polynomials — mathematical expressions that involve variables raised to various powers. In particular, he proved that if a length is constructible, then it must also be a root, or value that produces zero, of a certain type of polynomial, namely one that can’t be factored, or simplified, further, and whose degree (the largest exponent of x) is a power of 2 (so 2, 4, 8, 16 and so on).

For instance, √2 and the golden ratio are constructible, and they are roots of the polynomials x² − 2 and x² − x − 1, respectively. On the other hand, ∛2 is a root of the degree 3 polynomial x³ − 2, which doesn’t qualify, so it is impossible to construct a segment of this length.
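These root relationships are easy to sanity-check numerically. The snippet below (a simple Python check, not a proof) verifies that each number is a root of its stated polynomial; note that ∛2's polynomial has degree 3, which is not a power of 2, in line with Wantzel's condition.

```python
import math

sqrt2 = math.sqrt(2)
phi = (1 + math.sqrt(5)) / 2   # the golden ratio
cbrt2 = 2 ** (1 / 3)           # the cube root of 2

assert abs(sqrt2**2 - 2) < 1e-12       # root of x^2 - 2 (degree 2: constructible)
assert abs(phi**2 - phi - 1) < 1e-12   # root of x^2 - x - 1 (degree 2: constructible)
assert abs(cbrt2**3 - 2) < 1e-12       # root of x^3 - 2 (degree 3: not constructible)
```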

Wantzel used his results to resolve other classical problems by proving that they can’t be solved — it is impossible to trisect some angles, it is impossible to double the cube and it is impossible to construct certain regular polygons. But because the exact nature of π remained a mystery, the question of squaring the circle remained open.

The key to resolving the problem, it turned out, was to cleverly divide the set of complex numbers into two sets, much as earlier generations partitioned the real numbers into rational and irrational numbers. Many complex numbers are the root of some polynomial with integer coefficients; mathematicians call these numbers algebraic. But this isn’t true for all numbers, and these non-algebraic values are called transcendental.

Every rational number is algebraic, and some irrational numbers are too, like ∛2. Even the imaginary number i is algebraic, as it is a root of x² + 1.


This diagram shows the relationships between the various kinds of numbers. An irrational number is any real number that is not rational, and a transcendental number is any complex number that is not algebraic.

It was not obvious that transcendental numbers should exist. Moreover, it’s challenging to prove that a given number is transcendental because it requires proving a negative: that it is not the root of any polynomial with integer coefficients.

In 1844, Joseph Liouville found the first one by coming at the problem indirectly. He discovered that irrational algebraic numbers cannot be approximated well by rational numbers. So if he could find a number that was approximated well by fractions with small denominators, it would have to be something else: a transcendental number. He then constructed just such a number.

Liouville’s manufactured number,

L=0.1100010000000000000000010…,

contains only 0s and 1s, with the 1s occurring in certain designated places: the values of n!. So the first 1 is in the first (1!) place, the second is in the second (2!) place, the third is in the sixth (3!) place, and so on. Notice that as a result of his careful construction, 1/10, 11/100, and 110,001/1,000,000 are all very good approximations of L — better than one would expect given the size of their denominators. For instance, the third of these values has 3! (six) decimal digits, 0.110001, but agrees with L for a total of 23 digits, or 4!−1.
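Liouville's construction is concrete enough to replay exactly with Python's `Fraction` type. The sketch below builds partial sums of L = Σ 10^(−n!) and shows how unreasonably good the truncated approximation 110001/1,000,000 is, given the size of its denominator.

```python
from fractions import Fraction
from math import factorial

def liouville(terms):
    """Partial sum of Liouville's number: sum of 10^(-n!) for n = 1..terms."""
    return sum(Fraction(1, 10 ** factorial(n)) for n in range(1, terms + 1))

# Truncating after 3 terms gives 110001/1000000 = 0.110001, yet this
# approximation agrees with the next partial sum through 23 decimal
# places (4! - 1), far better than a 6-digit denominator "should" allow.
approx = liouville(3)
print(approx)                        # 110001/1000000
print(float(liouville(4) - approx))  # 1e-24
```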

Although L proved that transcendental numbers exist, π does not satisfy Liouville’s criterion (it can’t be approximated well enough by rational numbers), so its classification remained elusive.

The key breakthrough occurred in 1873, when Charles Hermite devised an ingenious technique to prove that e, the base of the natural logarithm, is transcendental. This was the first non-contrived transcendental number, and nine years later it allowed Ferdinand von Lindemann to extend Hermite’s technique to prove that π is transcendental. In fact he went further, showing that e^d is transcendental whenever d is a nonzero algebraic number. Rephrased, this says that if e^d is algebraic, then d is either zero or transcendental.

To prove that π is transcendental, Lindemann then made use of what many people view as the most beautiful formula in all of mathematics, Euler’s identity: e^(πi) = −1. Because −‍1 is algebraic, Lindemann’s theorem states that πi is transcendental. And because i is algebraic, π must be transcendental. Thus, a segment of length π is impossible to construct, and it is therefore impossible to square the circle.

Although Lindemann’s result was the end of one story, it was just an early chapter in the story of transcendental numbers. Much still had to be done, especially, as we’ll see, given how prevalent these misfit numbers are.

Shortly after Hermite proved that e was transcendental, Georg Cantor proved that infinity comes in different sizes. The infinity of rational numbers is the same as the infinity of whole numbers. Such sets are called countably infinite. However, the sets of real numbers and irrational numbers are larger; in a sense that Cantor made precise, they are “uncountably” infinite. In the same paper, Cantor proved that although the set of algebraic numbers contains all rational numbers and infinitely many irrational numbers, it is still the smaller, countable size of infinity. Thus, its complement, the transcendental numbers, is uncountably infinite. In other words, the vast majority of real and complex numbers are transcendental.

Yet even by the turn of the 20th century, mathematicians could conclusively identify only a few. In 1900, David Hilbert, one of the most esteemed mathematicians of the era, produced a now-famous list of the 23 most important unsolved problems in mathematics. His seventh problem, which he considered one of the harder ones, was to prove that a^b is transcendental when a is algebraic and not equal to zero or 1, and b is an algebraic irrational number.

In 1929, the young Russian mathematician Aleksandr Gelfond proved the special case in which b = ±i√r and r is a positive rational number. This also implies that e^π is transcendental, which is surprising because neither e nor π is algebraic, as required by the theorem. However, by cleverly manipulating Euler’s identity again, we see that

e^π = e^((πi)(−i)) = (e^(πi))^(−i) = (−1)^(−i).
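This identity can be checked numerically. Python's complex power uses the principal branch of the logarithm, so (−1)^(−i) evaluates to e^π ≈ 23.1407 (Gelfond's constant), up to floating-point error.

```python
import math

# e^pi = (e^(i*pi))^(-i) = (-1)^(-i), using the principal branch of the complex log
lhs = math.exp(math.pi)    # 23.1406926...
rhs = (-1 + 0j) ** (-1j)   # complex exponentiation
print(lhs, rhs)
assert abs(rhs - lhs) < 1e-9
```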

Shortly afterward, Carl Siegel extended Gelfond’s proof to include values of b that are real quadratic irrational numbers, allowing him to conclude that 2^√2 is transcendental. In 1934, Gelfond and Theodor Schneider independently solved the entirety of Hilbert’s problem.


Wednesday, June 21, 2023

Whole College Approach effective in improving maths learning, research project finds



Findings from the pilot year of a programme taking a Whole College Approach (WCA) to improving maths learning indicate that it is an effective means of supporting colleges to improve student attendance and learning experiences. Furthermore, the approach was found to have the potential to bring about sustainable organisational change in the way that colleges organise and manage students’ maths learning.

Delivered in four phases – discovery, planning, intervention and review – the Whole College Approach pilot involved a process of organisational change through which student learning of maths became a shared responsibility and all staff were actively involved in a collaborative effort to improve students’ understanding of the subject. The project began in April 2021 as a strand of the Education and Training Foundation (ETF) Centres for Excellence in Maths (CfEM) programme, funded by the Department for Education, and followed the publication of the Nuffield-funded Mathematics in Further Education Colleges (MiFEC) project (Noyes & Dalby 2017–20).

That work evidenced broad agreement from a cross-section of staff in England’s FE colleges about the importance of maths and students with low attainment improving their mathematics skills. However, it also found that students can receive inconsistent messages, explicitly and implicitly, about the need to engage with mathematics; and that combinations of strategic or operational approaches can produce variations in students’ experiences and sometimes hinder their participation or progress.

In the WCA pilot project, which was delivered by the University of Nottingham’s Centre for Research in Mathematics Education (CRME) on behalf of the ETF, three elements were identified as being effective in guiding and supporting colleges through a process of organisational change:

- The support and guidance given by each college’s ‘critical friend’ was a key factor in the success. Participating colleges reported that having an external facilitator to work through the self-assessment tasks with them was an important early step. Through meetings with their critical friend, colleges reported that their thinking was challenged. They found the interaction and feedback to be an effective means of support that helped them review and refine their analysis of the problem and develop action plans with more focused and appropriate interventions.
- Colleges also agreed that the self-assessment activities were an important element of the programme. The first activity was useful in starting the group thinking about the context in which they were working and its contextual affordances and constraints. This was followed by activities to explore the college culture and use different perspectives to analyse the issues thoroughly. Colleges valued the way these tasks stimulated rich, purposeful discussion about the problems they wanted to address.
- Colleges found that the constitution of a cross-college team to collaborate and lead their college WCA was an essential element of the programme. It was important to include representatives from vocational and maths departments, including both managers and teachers, and to secure the active involvement of a senior leader.

Steve Pardoe, Head of Centres for Excellence in Maths at the ETF, said:

“It has been encouraging to see how the WCA programme has helped colleges develop purposeful collaboration between maths and vocational staff and supported the co-design of effective interventions to improve their maths provision. By working across traditional silo structures and sharing different perspectives, staff have gained a better understanding of the problems and found new ways of tackling key issues such as student motivation and engagement collaboratively.

“This research project demonstrates that success in FE maths is down to more than just maths teaching. The Whole College Approach has proved to be an effective process for bringing people together from across a college to support improvement processes for maths. In doing so, it has achieved its objectives of translating MiFEC and other related ‘whole organisation’ research into practice; building sector knowledge about WCAs; and developing support mechanisms and producing support material. It has also identified moderating factors that can affect the implementation of the approach, such as college readiness and stability, time pressures and the extra pressure put on staff by the Covid pandemic.”

Case studies of some of the 16 colleges that participated in the project – Harlow College, Leyton Sixth Form College, Stamford College, the Lakes College, Weston College, and Wilberforce Sixth Form College – are available on the ETF website.

For further details of the wider CfEM programme please visit the CfEM resources and evidence hub.


Research Team Uses Math To Help Allocate Resources During Natural Disasters



New research by a team led by an engineering professor at Northeastern University has found that a mathematical model can predict human movement during natural disasters.

The research team looked at events like the COVID-19 pandemic, Hurricane Dorian, and the Kincade Wildfire to predict patterns of human movement and used anonymous information from 90 million Americans to create the mathematical model used during the study.

According to Qi Ryan Wang, an associate professor of environmental and civil engineering at Northeastern University, the findings from this research study can help governments and emergency responders properly allocate resources during a range of disasters.

What Did The Research Team Find?

The research team led by Wang examined people’s mobility behavior during six major disasters and found a disparity in movement between economic groups. They concluded that those with fewer means are exposed to more risk during crises such as disease outbreaks and natural disasters.

For example, during the height of the COVID-19 pandemic, people who lived in poorer neighborhoods left home more frequently because they were essential workers and because they could not stock up on water and food for days or weeks at a time, as wealthier individuals and families could.

These communities also did not have access to emergency generators and other important technologies. In an interview published on Northeastern’s news platform, Wang stated similar mobility patterns were observed during weather-related disasters.

This need for access to food, water, and other supplies during emergencies is one of the reasons experts tell people to keep survival bags in their homes full of non-perishable foods, water bottles, and medical supplies.

According to ExpressVPN’s emergency survival article, people are also meant to store different tech products in their emergency bags, such as satellite phones, medical flash drives, and portable power banks. When you have these bags already in place, you can limit how many times you leave the house for food, water, and essentials.

How Will This Information Help Governments and Emergency Responders?

According to Wang, governments and emergency responders can use this information on human mobility during disasters to better understand resource allocation and which communities need help first. Understanding this will help institutions mount more effective responses to disease outbreaks, earthquakes, and wildfires.

The Northeastern News board also said the research study touched on the concept of temporal decay, which refers to the way people’s attention drifts away from certain information or situations as time passes. This phenomenon occurred as the COVID-19 pandemic moved into its second year, and governments can bear it in mind when implementing year-long restrictions and regulations.

As per Access Partnership, the annual number of natural disasters is projected to increase by 37% (to 541 occurrences) by 2025. With rising concerns over floods, earthquakes, sinkholes, and diseases, the human mobility model provided by Wang and his research team could prove more valuable than ever in helping governments and emergency responders better allocate resources and supplies during these events.

Highlights from the study found that less wealthy communities are at greater risk of exposure to events like diseases because they don’t have the freedom to work from home or stock up on supplies; this information is crucial if another pandemic ever arises.




National Mathematics Day is observed on December 22 every year. This date marks the birth anniversary of legendary mathematician Srinivasa Ramanujan. In 2012, then Prime Minister Manmohan Singh declared December 22 as National Mathematics Day to honor the life and achievements of Ramanujan.

Here are 10 points on the life and work of the great mathematician:

1. Srinivasa Ramanujan was born on December 22, 1887, in Erode, Tamil Nadu, to a Brahmin Iyengar family. He developed a liking for mathematics at a very young age, mastering trigonometry by 12 and earning eligibility for a scholarship at the Government Arts College in Kumbakonam.
2. He studied at the Government College in Kumbakonam from 1903 but, owing to his dislike of non-mathematical subjects, failed his exams there. He later enrolled in Madras’ Pachaiyappa’s College.
3. In 1912, Ramanujan started working as a clerk in the Madras Port Trust. There, his mathematical genius was recognised by some of his colleagues, one of whom referred him to Professor G.H. Hardy of Trinity College, Cambridge. He met Hardy in 1913, after which he went to Trinity College.
4. In 1916, Ramanujan received his Bachelor of Science (BSc) degree. He went on to publish several papers with Hardy’s help, and the two collaborated on several joint projects.
5. Ramanujan was elected to the London Mathematical Society in 1917. The following year, he was elected to the prestigious Royal Society for his research on elliptic functions and the theory of numbers. He was also the first Indian to be elected a Fellow of Trinity College.
6. Despite receiving no formal training in pure mathematics, Ramanujan made impactful contributions to the discipline in his short life. His areas of work include infinite series, continued fractions, number theory and mathematical analysis.
7. He also made notable contributions to the hypergeometric series, the Riemann series, the elliptic integrals, the theory of divergent series, and the functional equations of the zeta function. He is said to have discovered his own theorems and independently compiled 3,900 results.
8. In 1919, Ramanujan returned to India. A year later, on April 26, he breathed his last owing to deteriorating health. He was just 32 years old. His biography, ‘The Man Who Knew Infinity’ by Robert Kanigel, depicts his life and journey to fame.
9. A film of the same name was released in 2015, with British-Indian actor Dev Patel playing Ramanujan. The film shed light on Ramanujan’s childhood in India, his time in Britain, and his journey to becoming a great mathematician.
10. An anecdote from his biography shows Ramanujan’s brilliance. G.H. Hardy recalled: “I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavourable omen. ‘No,’ he replied, ‘it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.’” Thus, 1729 became the Hardy-Ramanujan number: definitely not Ramanujan’s greatest contribution, but perhaps the easiest one to remember.
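The taxicab property is easy to confirm with a short brute-force search (plain Python, illustrative): collect all sums of two positive cubes and pick the smallest one that arises in two different ways.

```python
from itertools import combinations_with_replacement

# Map each sum of two cubes to the list of (a, b) pairs that produce it.
sums = {}
for a, b in combinations_with_replacement(range(1, 20), 2):
    sums.setdefault(a**3 + b**3, []).append((a, b))

# Smallest number expressible as a sum of two cubes in two different ways.
taxicab = min(n for n, pairs in sums.items() if len(pairs) >= 2)
print(taxicab, sums[taxicab])  # 1729 [(1, 12), (9, 10)]
```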



Automating the math for decision-making under uncertainty




One reason deep learning exploded over the last decade was the availability of programming languages that could automate the math — college-level calculus — that is needed to train each new model. Neural networks are trained by tuning their parameters to try to maximize a score that can be rapidly calculated for training data. The equations used to adjust the parameters in each tuning step used to be derived painstakingly by hand. Deep learning platforms use a method called automatic differentiation to calculate the adjustments automatically. This allowed researchers to rapidly explore a huge space of models, and find the ones that really worked, without needing to know the underlying math.
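The core trick of automatic differentiation can be sketched in a few lines using dual numbers, which propagate a value and its derivative together through every operation. This is a toy forward-mode sketch for illustration; production deep learning platforms use far more sophisticated reverse-mode machinery.

```python
class Dual:
    """Forward-mode AD: a value paired with its derivative ("dot")."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)  # sum rule
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)  # product rule
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x: seed the input's dot with 1 and read it back out."""
    return f(Dual(x, 1.0)).dot

# d/dx (3x^2 + 2x) at x = 4 is 6x + 2 = 26
print(grad(lambda x: 3 * x * x + 2 * x, 4.0))  # 26.0
```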

But what about problems like climate modeling, or financial planning, where the underlying scenarios are fundamentally uncertain? For these problems, calculus alone is not enough — you also need probability theory. The "score" is no longer just a deterministic function of the parameters. Instead, it's defined by a stochastic model that makes random choices to model unknowns. If you try to use deep learning platforms on these problems, they can easily give the wrong answer. To fix this problem, MIT researchers developed ADEV, which extends automatic differentiation to handle models that make random choices. This brings the benefits of AI programming to a much broader class of problems, enabling rapid experimentation with models that can reason about uncertain situations.
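To see why naive autodiff can go wrong on stochastic models, and what a system like ADEV must provide, consider a toy objective defined as an expectation over a random choice. The sketch below uses the classic score-function (REINFORCE) estimator, one standard unbiased gradient estimator for such expectations; it is a generic illustration of the problem class, not ADEV's actual algorithm.

```python
# Differentiating E_{x ~ Bernoulli(p)}[f(x)] with respect to p.
# Autodiff through a single sampled x is wrong (the sample doesn't
# depend smoothly on p); the score-function identity gives an
# unbiased estimator instead:
#   d/dp E[f(x)] = E[ f(x) * d/dp log P(x; p) ]
import random

def expected_loss_grad(p, f, n=100_000, seed=0):
    """Monte Carlo estimate of d/dp E_{x ~ Bernoulli(p)}[f(x)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < p else 0
        # d/dp log P(x; p) = x/p - (1-x)/(1-p)
        score = x / p - (1 - x) / (1 - p)
        total += f(x) * score
    return total / n

# E[f(x)] = p*f(1) + (1-p)*f(0), so the true gradient is f(1) - f(0).
# With f(x) = 10x the gradient is 10 for every p; the estimate below
# lands close to 10 for large n.
print(expected_loss_grad(0.3, lambda x: 10.0 * x))
```

Hand-deriving low-variance versions of such estimators is exactly the tedious, error-prone work that, per Lew's comment below, ADEV aims to automate.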

Lead author and MIT electrical engineering and computer science PhD student Alex Lew says he hopes people will be less wary of using probabilistic models now that there’s a tool to automatically differentiate them. “The need to derive low-variance, unbiased gradient estimators by hand can lead to a perception that probabilistic models are trickier or more finicky to work with than deterministic ones. But probability is an incredibly useful tool for modeling the world. My hope is that by providing a framework for building these estimators automatically, ADEV will make it more attractive to experiment with probabilistic models, possibly enabling new discoveries and advances in AI and beyond.”

Sasa Misailovic, an associate professor at the University of Illinois at Urbana-Champaign who was not involved in this research, adds: "As the probabilistic programming paradigm is emerging to solve various problems in science and engineering, questions arise on how we can make efficient software implementations built on solid mathematical principles. ADEV presents such a foundation for modular and compositional probabilistic inference with derivatives. ADEV brings the benefits of probabilistic programming — automated math and more scalable inference algorithms — to a much broader range of problems where the goal is not just to infer what is probably true but to decide what action to take next."

In addition to climate modeling and financial modeling, ADEV could also be used for operations research — for example, simulating customer queues for call centers to minimize expected wait times, by simulating the wait processes and evaluating the quality of outcomes — or for tuning the algorithm that a robot uses to grasp physical objects. Co-author Mathieu Huot says he’s excited to see ADEV "used as a design space for novel low-variance estimators, a key challenge in probabilistic computations."

The research, awarded the SIGPLAN Distinguished Paper award at POPL 2023, is co-authored by Vikash Mansinghka, who leads MIT's Probabilistic Computing Project in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory, and helps lead the MIT Quest for Intelligence, as well as Mathieu Huot and Sam Staton, both at Oxford University. Huot adds, "ADEV gives a unified framework for reasoning about the ubiquitous problem of estimating gradients unbiasedly, in a clean, elegant and compositional way." The research was supported by the National Science Foundation, the DARPA Machine Common Sense program, and a philanthropic gift from the Siegel Family Foundation.

"Many of our most controversial decisions — from climate policy to the tax code — boil down to decision-making under uncertainty. ADEV makes it easier to experiment with new ways to solve these problems, by automating some of the hardest math," says Mansinghka. "For any problem that we can model using a probabilistic program, we have new, automated ways to tune the parameters to try to create outcomes that we want, and avoid outcomes that we don't."


February: maths research | News and features



A new research programme involving six UK universities, including Bristol, will help tackle cybercrime and increase resilience and carbon reduction in the electricity sector.

The Network Stochastic Processes and Time Series (NeST) programme aims to develop new ways of extracting useful information from particular types of huge, complex datasets.

The aim is to achieve a step change in the modelling and analysis of vast banks of data: ever-growing, often interconnected records relating to customer needs and behaviour and to the performance of systems and equipment.

Across many sectors, this will make it easier to pinpoint problems and opportunities, make accurate predictions and plan robustly.

Dovetailing leading-edge expertise in statistics, probability theory and data science, the six-year programme is being funded by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

NeST involves six universities:
University of Bath
University of Bristol
Imperial College London
University of Oxford
University of York
London School of Economics

It also involves a range of companies and government organisations. Partners include:
BT
Microsoft
the Office for National Statistics
Financial Network Analytics
Government Communications Headquarters

The ambition is for NeST to establish itself as the world’s leading research centre in the development of new theory, methods and computational techniques. It will focus on tackling the mathematical and statistical analysis of datasets generated by ‘dynamic networks’.

These include not just IT networks, big and small, but also networks in the wider, more traditional sense: the railway network, for example, with all its lines and connection points, such as the stations where the network meets its customers.

The dynamic aspect of networks is particularly important: most datasets are not static but are constantly evolving and growing.

This maths research has multiple potential fields of application and is targeting, for example:
More secure, greener power grids: greater use of renewables is key to the UK’s energy security and its ability to achieve net zero carbon emissions. Integrating intermittent energy sources such as wind and solar requires sophisticated forecasting of net demand on power networks. NeST will develop computer models and simulations that help meet this challenge.
Better detection of cyberattacks: in 2022, cybercrime cost global businesses, consumers and governments an estimated £1 trillion. Innovative tools are urgently needed to make IT networks safer. NeST will develop new ways of analysing network traffic to pinpoint tell-tale changes indicative of cyberattacks, enabling earlier detection and reducing damage caused.
Stronger protection of human rights. Some of the greatest harms to society arise through organised forms of exploitation, including human trafficking and corruption. NeST will apply dynamic network methods to real-world data collected to tackle such challenges, enabling deeper insights into the structure and dynamics of exploitation, as well as the networks of relationships that allow such organised exploitation to take place.
Improved mail services: mail companies face many logistical challenges to enhance the efficiency of their services. NeST will help them match resources to changing demand and better utilise their distribution infrastructure and vehicle fleets. Benefits will include improved services for business and the public, plus significant cuts to carbon footprints.
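The cyberattack use case above hinges on spotting "tell-tale changes" in a constantly evolving data stream. As a generic illustration of that kind of statistical change detection (not NeST's actual methods, which are still being developed), here is a minimal CUSUM-style detector; the function name and parameters are illustrative.

```python
# A simple one-sided CUSUM detector: accumulate drift above a known
# baseline mean and raise an alarm once it crosses a threshold.

def cusum_alarm(series, baseline, slack=0.5, threshold=5.0):
    """Return the index where cumulative drift above `baseline` first
    exceeds `threshold`, or None if no change is detected. `slack`
    absorbs ordinary fluctuation so noise alone doesn't trigger."""
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + (x - baseline - slack))
        if s > threshold:
            return i
    return None

# Steady traffic around 10, then a sustained jump to ~14 at index 10.
traffic = [10, 10, 11, 9, 10, 10, 9, 11, 10, 10,
           14, 14, 15, 14, 14, 14, 15, 14, 14, 14]
print(cusum_alarm(traffic, baseline=10.0))  # alarm shortly after the shift
```

Real network traffic is high-dimensional and the networks themselves are dynamic, which is precisely why NeST's combination of statistics, probability and data science is needed beyond textbook detectors like this one.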

Jane Nicholson, EPSRC Director for Research Base, said: “The NeST programme demonstrates the fundamental importance of the mathematical sciences to important sectors such as energy, transport and cybersecurity. The team’s work in establishing itself as a leader in the study and exploitation of dynamic networks, which will reflect the fact that the data which underpin these critical sectors is constantly changing, will deliver benefits for industry and key services which impact on our daily lives.”

Professor Patrick Rubin-Delanchy, one of the Deputy Directors of NeST from the School of Mathematics at the University of Bristol, added: “We can solve important scientific and societal problems if we can better understand dynamic networks. This grant will allow us to build a national centre bringing a highly diverse community of researchers together, with backgrounds in statistics, probability and data science, to advance our understanding and techniques.”





Thursday, June 8, 2023

AI Insights - AlphaTensor - First AlphaZero math extension opens new research possibilities





DeepMind's AI creates faster algorithms for solving complex mathematical problems. Algorithms have helped mathematicians perform fundamental computations for millennia. The ancient Egyptians, for example, devised a method for multiplying two numbers without a multiplication table, and the Greek mathematician Euclid described an algorithm for computing the greatest common divisor that is still in use today.

In a paper published in Nature, DeepMind researchers describe AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. It sheds light on a 50-year-old open question about the fastest way to multiply two matrices.

This paper is a step toward DeepMind's goal of using AI to improve science and solve the most fundamental problems. AlphaTensor is based on AlphaZero, an agent that has outperformed humans at board games like chess, go, and shogi. This paper shows how AlphaZero went from playing games to solving math problems for the first time.


Matrix multiplication

Matrix multiplication is one of the simplest operations in algebra, taught in most high school math classes. But outside the classroom, this humble operation has an outsized impact on today's digital world and appears everywhere in modern computing.

It is used to process images on smartphones, recognize speech commands, render graphics for computer games, run weather simulations, compress data and videos for sharing on the internet, and much more. Companies worldwide spend a great deal of time and money developing hardware that can multiply matrices quickly and efficiently. So even small improvements to matrix multiplication can have a significant effect.

Mathematicians long believed the standard matrix multiplication algorithm was the most efficient one possible. But in 1969, the German mathematician Volker Strassen shocked the field by showing that better algorithms exist.
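Strassen's insight can be shown concretely in the 2x2 case: his scheme uses seven scalar multiplications where the standard algorithm uses eight, and applying it recursively to large matrices beats the naive cubic-time method. The sketch below spells out the seven products; the function names are illustrative.

```python
# Strassen's 1969 construction for multiplying two 2x2 matrices with
# 7 multiplications instead of the standard 8.

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # Seven products (multiplications are the expensive operations)
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine using only additions and subtractions
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # Standard algorithm: 8 multiplications
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # prints [[19, 22], [43, 50]], same as naive_2x2
```

AlphaTensor's search space is exactly this kind of decomposition: each candidate algorithm is a way of trading multiplications for additions, and the system hunts for decompositions with fewer multiplications than any previously known.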

In their paper, the researchers explored how modern AI techniques could automate the discovery of new matrix multiplication algorithms. AlphaTensor found algorithms that outperform the current state of the art for many matrix sizes, going beyond what human intuition had produced. That machine-discovered algorithms can beat human-designed ones is a significant step forward in algorithmic discovery.

Conclusion

AlphaTensor is trained from scratch to find matrix multiplication algorithms that improve on those devised by humans or other computer programs. Although AlphaTensor outperforms known algorithms, the researchers note one drawback: the set of possible factor entries F must be defined ahead of time. This limits the search space and could mean missing out on efficient algorithms.

Adapting AlphaTensor to search over F could be an exciting direction for future research. Among AlphaTensor's most important strengths are its support for complex stochastic and non-differentiable rewards (from tensor rank to practical efficiency on specific hardware) and its ability to find algorithms for custom operations in a wide range of spaces (such as finite fields). The researchers also expect this to make it easier to use AlphaTensor to design algorithms that optimize practical metrics.

The researchers also say their method can be applied to related fundamental mathematical problems, such as computing other notions of rank and NP-hard matrix factorization problems. By using deep reinforcement learning (DRL) to tackle a core NP-hard computational problem in mathematics (the computation of tensor ranks), AlphaTensor shows that DRL can address complex mathematical problems and could help mathematicians make new discoveries.


Tuesday, June 6, 2023

Building the Mathematical Library of the Future





Every day, dozens of like-minded mathematicians gather on an online forum called Zulip to build what they believe is the future of their field.

They’re all devotees of a software program called Lean. It’s a “proof assistant” that, in principle, can help mathematicians write proofs. But before Lean can do that, mathematicians themselves have to manually input mathematics into the program, translating thousands of years of accumulated knowledge into a form Lean can understand.

To many of the people involved, the virtues of the effort are nearly self-evident.

“It’s just fundamentally obvious that when you digitize something you can use it in new ways,” said Kevin Buzzard of Imperial College London. “We’re going to digitize mathematics and it’s going to make it better.”

Digitizing mathematics is a longtime dream. The expected benefits range from the mundane — computers grading students’ homework — to the transcendent: using artificial intelligence to discover new mathematics and find new solutions to old problems. Mathematicians expect that proof assistants could also review journal submissions, finding errors that human reviewers occasionally miss, and handle the tedious technical work that goes into filling in all the details of a proof.

But first, the mathematicians who gather on Zulip must furnish Lean with what amounts to a library of undergraduate math knowledge, and they’re only about halfway there. Lean won’t be solving open problems anytime soon, but the people working on it are almost certain that in a few years the program will at least be able to understand the questions on a senior-year final exam.

And after that, who knows? The mathematicians participating in these efforts don’t fully anticipate what digital mathematics will be good for.

“We don’t really know where we’re headed,” said Sébastien Gouëzel of the University of Rennes.

You Plan, Lean Chops

Over the summer, a group of experienced Lean users ran an online workshop called “Lean for the Curious Mathematician.” In the first session, Scott Morrison of the University of Sydney demonstrated how to write a proof in the program.

He began by typing the statement he wanted to prove in syntax Lean understands. In plain English, it translates to “There are infinitely many prime numbers.” There are several ways to prove this statement, but Morrison wanted to use a slight modification of the first one ever discovered, Euclid’s proof from 300 BCE, which involves multiplying all known primes together and adding 1 to find a new prime (either the product itself or one of its divisors will be prime). Morrison’s choice reflected something basic about using Lean: The user has to come up with the big idea of the proof on their own.

“You’re responsible for the first suggestion,” Morrison said in a later interview.

After typing the statement and selecting a strategy, Morrison spent a few minutes laying out the structure of the proof: He defined a series of intermediate steps, each of which was relatively simple to prove on its own. While Lean can’t come up with the overall strategy of a proof, it can often help execute smaller, concrete steps. In breaking the proof into manageable sub-tasks, Morrison was a bit like a chef instructing line cooks to chop an onion and simmer a stew. “It’s at this point that you hope Lean takes over and starts being helpful,” Morrison said.
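The structure Morrison described (a top-level statement broken into intermediate steps, each simple enough for Lean to help with) might look roughly like this. This is a hedged sketch in Lean 4 / mathlib-style syntax: the lemma names are illustrative, exact names vary between versions, and the intermediate proofs are left as `sorry` placeholders.

```lean
-- Statement: for every N there is a prime p ≥ N,
-- i.e. there are infinitely many primes.
theorem infinitely_many_primes (N : ℕ) : ∃ p, N ≤ p ∧ Nat.Prime p := by
  -- Euclid's big idea, supplied by the human: take the least
  -- prime factor of N! + 1.
  let p := Nat.minFac (Nat.factorial N + 1)
  refine ⟨p, ?_, ?_⟩
  · -- Intermediate step: no q ≤ N can divide N! + 1, since q divides N!,
    -- so p must exceed N.
    sorry
  · -- Intermediate step: the least factor of a number > 1 is prime.
    sorry
```

The human supplies the strategy in the `let` line; the `sorry` markers stand in for the sub-proofs that tactics can often discharge.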

Lean performs these intermediate tasks by using automated processes called “tactics.” Think of them as short algorithms tailored to perform a very specific job.

As he worked through his proof, Morrison ran a tactic called “library search.” It trawled Lean’s database of mathematical results and returned some theorems that it thought could fill in the details of a particular section of the proof. Other tactics perform different mathematical chores. One, called “linarith,” can take a set of inequalities among, say, two real numbers, and confirm for you that a new inequality involving a third number is true: If a is 2 and b is greater than a, then 3a + 4b is greater than 12. Another does most of the work of applying basic algebraic rules like associativity.
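The inequality example above, written out as a Lean snippet (again Lean 4 / mathlib-style syntax; exact names may differ between versions):

```lean
example (a b : ℝ) (ha : a = 2) (hb : b > a) : 3 * a + 4 * b > 12 := by
  linarith  -- from a = 2 and b > 2: 3a + 4b > 6 + 8 = 14 > 12
```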

“Two years ago you would have had to [apply the associative property] yourself in Lean,” said Amelia Livingston, an undergraduate math major at Imperial College London who is learning Lean from Buzzard. “Then [someone] wrote a tactic that can do it all for you. Every time I use it, I get very happy.”

Altogether, it took Morrison 20 minutes to complete Euclid’s proof. In some places he filled in the details himself; in others he used tactics to do it for him. At each step, Lean checked to make sure his work was consistent with the program’s underlying logical rules, which are written in a formal language called dependent type theory.

“It’s like a sudoku app. If you make a move that’s not valid, it will go buzz,” Buzzard said. At the end, Lean certified that Morrison’s proof worked.

The exercise was exciting in the way it always is when technology steps in to do something you used to do yourself. But Euclid’s proof has been around for more than 2,000 years. The kinds of problems mathematicians care about today are so complicated that Lean can’t even understand the questions yet, let alone support the process of answering them.

“It will likely be decades before this is a research tool,” said Heather Macbeth of Fordham University, a fellow Lean user.

So before mathematicians can work with Lean on the problems they really care about, they have to equip the program with more mathematics. That’s actually a relatively straightforward task.

“Lean being able to understand something is pretty much just a matter of human beings having [translated math textbooks] into the form Lean can understand,” Morrison said.

Unfortunately, straightforward doesn’t mean easy, especially considering that for a lot of mathematics, textbooks don’t really exist.
Scattered Knowledge

If you didn’t study higher math, the subject probably seems exact and well-documented: Algebra I leads into algebra II, pre-calculus leads into calculus, and it’s all laid out right there in the textbooks, answer key in the back.

But high school and college math — even a lot of graduate school math — is a vanishingly small part of overall mathematical knowledge. The vast majority of it is much less organized.

There are huge, important areas of math that have never been fully written down. They’re stored in the minds of a small circle of people who learned their subfield of math from people who learned it from the person who invented it — which is to say, it exists nearly as folklore.

There are other areas where the foundational material has been written down, but it’s so long and complicated that no one has been able to check that it’s fully correct. Instead, mathematicians simply have faith.

“We rely on the reputation of the author. We know he’s a genius and a careful guy, so it must be correct,” said Patrick Massot of Paris-Saclay University.

This is one reason why proof assistants are so appealing. Translating mathematics into a language a computer can understand forces mathematicians to finally catalog their knowledge and precisely define objects.

Assia Mahboubi of the French national research institute Inria recalls the first time she realized the potential of such an orderly digital library: “It was fascinating for me that one could capture, in theory, the whole mathematical literature by the sheer language of logic and store a corpus of math in a computer and check it and browse it using these pieces of software.”

Lean isn’t the first program with this potential. The first, called Automath, came out in the 1960s, and Coq, one of the most widely used proof assistants today, came out in 1989. Coq users have formalized a lot of mathematics in its language, but that work has been decentralized and unorganized. Mathematicians worked on projects that interested them and only defined the mathematical objects needed to carry their projects out, often describing those objects in unique ways. As a result, the Coq libraries feel jumbled, like an unplanned city.

“Coq is an old man now, and it has a lot of scars,” said Mahboubi, who has worked with the program extensively. “It’s been collaboratively maintained by many people over time, and it has known defects due to its long history.”

In 2013, a Microsoft researcher named Leonardo de Moura launched Lean. The name reflects de Moura’s desire to create a program with an efficient, uncluttered design. He intended the program to be a tool for checking the accuracy of software code, not mathematics. But checking the correctness of software, it turns out, is a lot like verifying a proof.

“We built Lean because we care about software development, and there is this analogy between building math and building software,” said de Moura.

When Lean came out, there were plenty of other proof assistants available, including Coq, which is the most similar to Lean — the logical foundations of both programs are based on dependent type theory. But Lean represented a chance to start fresh.

Mathematicians gravitated to it quickly. They were such enthusiastic adopters of the program that they started to consume de Moura’s time with their math-specific development questions. “He got a bit sick of having to manage the mathematicians and said, ‘How about you guys make a separate repository?’” said Morrison.

Mathematicians created that library in 2017. They called it mathlib and eagerly began to fill it with the world’s mathematical knowledge, making it a kind of 21st-century Library of Alexandria. Mathematicians created and uploaded pieces of digitized mathematics, gradually building a catalog for Lean to draw on. And because mathlib was new, they could learn from the limitations of older systems like Coq and pay extra attention to how they organized the material.

“There’s a real effort to make a monolithic library of math in which all the pieces work with all the other pieces,” said Macbeth.
The Mathlib of Alexandria

The front page of mathlib features a real-time dashboard that charts the project’s progress. It has a leaderboard of top contributors, ranked by the number of lines of code they’ve created. There’s also a running tally of the total amount of mathematics that has been digitized: As of early October, mathlib contained 18,416 definitions and 38,315 theorems.

These are the ingredients that mathematicians can mix together in Lean to make mathematics. Right now, despite those numbers, it’s a limited pantry. It contains almost nothing from complex analysis or differential equations — two basic elements of many fields of higher math — and it doesn’t know enough to even state any of the Millennium Prize problems, the Clay Mathematics Institute’s list of the most important problems in mathematics.

But mathlib is slowly filling out. The work has the air of a barn raising. On Zulip, mathematicians identify definitions that need to be created, volunteer to write them and quickly provide feedback on each other’s work.

“Any research mathematician can look at mathlib and see 40 things it’s missing,” Macbeth said. “So you decide to fill in one of those holes. It really is instant gratification. Someone else reads it and comments on it within 24 hours.”

Many of the additions are small, as Sophie Morel of the École Normale Supérieure in Lyon discovered during the “Lean for the Curious Mathematician” workshop this summer. The conference organizers gave the participants relatively simple mathematical statements to prove in Lean as practice. While working on one of them, Morel realized her proof called for a lemma — a type of short steppingstone result — that mathlib didn’t have.

“It was a very small thing about linear algebra that somehow wasn’t yet there. The people who write mathlib try to be thorough, but you can never think of everything,” said Morel, who coded the three-line lemma herself.

Other contributions are more momentous. For the last year, Gouëzel has been working on a definition of “smooth manifold” for mathlib. Smooth manifolds are spaces — like lines, circles and the surface of a ball — that play a fundamental role in the study of geometry and topology. They also often feature in big results in areas like number theory and analysis. You couldn’t hope to do most forms of mathematical research without defining one.

But smooth manifolds come in different guises, depending on the context. They can be finite-dimensional or infinite-dimensional, have “boundary” or not have boundary, and be defined over a variety of number systems, such as the real, complex or p-adic numbers. Defining a smooth manifold is almost like trying to define love: You know it when you see it, but any strict definition is likely to exclude some obvious instances of the phenomenon.

“For a basic definition, you don’t have any choice [for how you define it],” Gouëzel said. “But with more complicated objects, there are maybe 10 or 20 different ways to formalize it.”

Gouëzel had to maintain a balancing act between specificity and generality. “My rule was, I know 15 applications of manifolds that I wanted to be able to state,” he said. “But I didn’t want the definition to be too general, because then you cannot work with it.”

The definition he came up with fills 1,600 lines of code, making it pretty long for a mathlib definition, but maybe slight compared to the mathematical possibilities it unlocks in Lean.

“Now that we have the language, we can start proving theorems,” he said.

Finding the right definition for an object, at the right level of generality, is a major preoccupation of the mathematicians building mathlib. Its creators hope to define objects in a way that’s useful now but flexible enough to accommodate the unanticipated uses mathematicians might have for these objects.

“There’s an emphasis on everything being useful far into the future,” Macbeth said.
Practice Makes Perfectoid

But Lean isn’t just useful — it offers mathematicians the chance to engage with their work in a new way. Macbeth still remembers the first time she tried a proof assistant. It was 2019 and the program was Coq (though she uses Lean now). She couldn’t put it down.

“In one crazy weekend I spent 12 hours a day [on it],” she said. “It was totally addictive.”

Other mathematicians talk about the experience the same way. They say working in Lean feels like playing a video game — complete with the same reward-based neurochemical rush that makes it hard to put the controller down. “You can do 14 hours a day in it and not get tired and feel kind of high the whole day,” Livingston said. “You’re constantly getting positive reinforcement.”


Researchers find that large language models struggle with math






Mathematics is the foundation of countless sciences, allowing us to model things like planetary orbits, atomic motion, signal frequencies, protein folding, and more. Moreover, it’s a valuable testbed for the ability to problem solve, because it requires problem solvers to analyze a challenge, pick out good methods, and chain them together to produce an answer.

Prior research has demonstrated the usefulness of AI that has a firm grasp of mathematical concepts. For example, OpenAI recently introduced GPT-f, an automated prover and proof assistant for the Metamath formalization language. GPT-f found new short proofs that have been accepted into the main Metamath library, the first time a machine learning-based system contributed proofs that were adopted by a formal mathematics community. For its part, Facebook also claims to have experimented successfully with math-solving AI algorithms. In a blog post last January, researchers at the company said they’d taught a model to view complex mathematical equations “as a kind of language and then [treat] solutions as a translation problem.”

“While most other text-based tasks are already nearly solved by enormous language models, math is notably different. We showed that accuracy is slowly increasing and, if trends continue, the community will need to discover conceptual and algorithmic breakthroughs to attain strong performance on math,” the coauthors wrote. “Given the broad reach and applicability of mathematics, solving math datasets with machine learning would be of profound practical and intellectual significance.”

To measure the problem-solving ability of large and general-purpose language models, the researchers created a dataset called MATH, which consists of 12,500 problems taken from high school math competitions. Given a problem from MATH, language models must generate a sequence that reveals the final answer.

Problems in MATH are labeled by difficulty from 1 to 5 and span seven subjects, including geometry, algebra, calculus, statistics, linear algebra, and number theory. They also come with step-by-step solutions so that language models can learn to answer new questions they haven’t seen before.
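As a concrete illustration of what such a labeled record and its grading might look like, here is a hedged sketch. The field names and the grading helper are assumptions for illustration; MATH solutions do mark their final answers with LaTeX `\boxed{...}`, which is what the extractor below keys on.

```python
# Illustrative MATH-style record (field names are assumptions):
# a problem, difficulty label, subject, and a step-by-step solution
# whose final answer is wrapped in \boxed{...}.
import re

record = {
    "problem": "What is $1 + 2 \\times 3$?",
    "level": "Level 1",
    "subject": "Algebra",
    "solution": "Order of operations: $2 \\times 3 = 6$, "
                "so $1 + 6 = \\boxed{7}$.",
}

def final_answer(solution: str) -> str:
    """Extract the last \\boxed{...} value from a generated solution,
    so a model's step-by-step output can be graded on its final answer."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1] if matches else ""

print(final_answer(record["solution"]))  # prints 7
```

Grading only the boxed final answer is what lets a model "show its work" in the intermediate steps without those steps being scored directly.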

Training models on the fundamentals of mathematics required the researchers to create a separate dataset with hundreds of thousands of solutions to common math problems. This second dataset, the Auxiliary Mathematics Problems and Solutions (AMPS), comprises more than 100,000 problems from Khan Academy with solutions and over 5 million problems generated using Mathematica scripts based on 100 hand-designed modules. In total, AMPS contains 23GB of content.

As the researchers explain, the step-by-step solutions in the datasets allow the language models to use a “scratch space” much like a human mathematician might. Rather than having to arrive at the correct answer right away, models can first “show their work” in partial solutions that step toward the right answer.
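A rough sketch of the scratch-space idea, with `generate` as a stub standing in for a real language-model sampling call; the prompt wording and the `Final Answer:` marker are illustrative assumptions, not the paper's exact format:

```python
def generate(prompt: str) -> str:
    # Stub: a real system would sample a continuation from a trained
    # language model here.
    return "The sum is 1 + 2 = 3. Final Answer: 3"

def answer_only(problem: str) -> str:
    """Ask directly for the answer, with no room to show work."""
    out = generate(f"Problem: {problem}\nFinal Answer:")
    return out.split("Final Answer:")[-1].strip()

def with_scratch_space(problem: str) -> str:
    """Let the model emit intermediate steps first; only the text after
    the final-answer marker is scored."""
    out = generate(f"Problem: {problem}\nSolution (step by step):")
    return out.split("Final Answer:")[-1].strip()
```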

Even with the solutions, the coauthors found that accuracy remained low for the large language models they benchmarked: GPT-3 and GPT-2, GPT-3’s predecessor. Having the models generate their own solutions before producing an answer actually degraded accuracy because while many of the steps were related to the question, they were illogical. Moreover, simply increasing the amount of training time and the number of parameters in the models, which sometimes improves performance, proved to be impractically costly. (In machine learning, parameters are variables whose values control the learning process.)

Even so, the researchers showed that step-by-step solutions still provide benefits in the form of improved performance. In particular, providing models with solutions at training time increased accuracy substantially, with pretraining on AMPS boosting accuracy by around 25%, equivalent to a 15-fold increase in model size.

“Despite these low accuracies, models clearly possess some mathematical knowledge: they achieve up to 15% accuracy on the easiest difficulty level, and they are able to generate step-by-step solutions that are coherent and on-topic even when incorrect,” the coauthors wrote. “Having models train on solutions increases relative accuracy by 10% compared to training on the questions and answers directly.”

The researchers have released MATH and AMPS as open source in the hope that they, along with existing mathematics datasets like DeepMind's, will spur further research in this direction.

Xsolla, a global video game commerce company, announced Xsolla Drops, a new tool to augment and scale influencer and affiliate programs.

The expansion of the existing Xsolla Partner Network gives game developers and publishers an efficient way to create and scale performance-based influencer and affiliate programs, the company said. Chris Hewish, president of Xsolla, made the announcement in a fireside chat on Monday at our GamesBeat Summit 2023 event, where Xsolla is a major sponsor.

Xsolla Drops provides an added layer of marketing support for game developers to increase user acquisition and incremental sales by streamlining the ability to promote their digital items, including virtual currencies, skins, NFTs, game keys, premium subscriptions, and more.

“Drops gives game developers and publishers the ability to build and reward their gaming audience easily,” said Alexander Menshikov, business head at Xsolla, in a statement. “We are helping game developers reward their fans, explore new user acquisition methods, and strengthen long-term engagement with a game’s current player base by offering exclusive in-game items and unique experiences. With Drops, developers will create game-specific campaigns with targeted audiences, delivering a personalized experience on a custom landing page with no code required.”

As user acquisition costs rise, developer budgets are taking a hit as developers attempt to navigate which marketing channels are the most effective, Xsolla said. Drops takes a multi-tiered campaign approach to this issue by bringing the thrill of game discovery back to the players, creating inherent value for creators, and raising a game's overall brand awareness.

This comprehensive, new marketing tool solves the challenge of the rising costs of acquiring, engaging, and retaining players with branded websites and close collaboration with influencers, artists, esports pro gamers, celebrities, and renowned agencies.

“We were thrilled with the results of our Drops campaign with Xsolla,” said Scott Robinson, owner of SprintGP, in a statement. “The onboarding process was easy and only took a few hours from start to finish. All we needed to do was to fill out the form, upload game design assets, and provide redemption instructions for players to get a reward. Xsolla handled the rest, including web page development and marketing setup. We doubled our user base in less than 24 hours and saw a 300% increase in website traffic. We highly recommend Xsolla’s Drops tool to any game developer looking for new ways to drive user acquisition and engagement.”

4th Edition of International Conference on Mathematics and Optimization Methods

Website Link: https://maths-conferences.sciencefather.com/

Award Nomination: https://x-i.me/XU6E

Instagram: https://www.instagram.com/maths98574/

Twitter: https://twitter.com/AnisaAn63544725

Pinterest: https://in.pinterest.com/maxconference20022/

