free eBooks & Large NUMERIC Models
First, the free eBooks (learn Chinese = job + culture, maybe even love):
How to Learn the Chinese "Alphabet"!
Mastering Constitutional Law for the Multistate Bar Exam Multiple Choice Section
Business Associations, Agency, Partnership, Corporations Law Quiz Questions for Final Exams and Bar Review
THE ART OF WAR AND PEACE
Watergate 2.0
Remember when I pointed out there would be AI-driven browsers and AI-driven operating systems? Surprise! I was right. Perplexity.ai is building an AI-driven browser.
Math Math Math
Ever notice how large language models SUCK at mathematics? It’s not you, it’s that sleazy language model! It’s a LANGUAGE model, NOT a mathematical model!
There are, however, several language models fine-tuned on mathematics listed here, e.g.:
- ollama run mathstral (there are others as well)
- Qwen2-Math
- WizardMath
- DeepSeek-R1
- Phi-4
Just Ctrl+F "math" to find the math-focused ones. Then it's a matter of parameter count; you can definitely run 13B models on a GeForce 4060, for example.
These models aim to improve the numerical reasoning and processing abilities of traditional LLMs. Some notable developments in this area include:
1. NumeroLogic: This is a reformatting technique that adds the number of digits as a prefix to numbers, improving LLMs' performance on arithmetic tasks and general language understanding[1].
2. Probabilistic reasoning enhancements: Researchers have evaluated and improved LLMs' ability to make inferences about distributions and perform probabilistic reasoning tasks[2].
3. Number Understanding and Processing Ability (NUPA) benchmark: This comprehensive benchmark covers 17 distinct numerical tasks across four major categories, designed to test and improve LLMs' numerical capabilities[3].
4. Code-based reasoning: A technique that enables LLMs to solve numeric or symbolic reasoning tasks by writing Python programs, improving their performance on complex mathematical problems[6].
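The code-based reasoning in item 4 can be sketched in a few lines: instead of asking the model to compute an answer token by token, you ask it to emit a Python program and execute that, so the arithmetic is exact. The `generate_program` function below is a hypothetical stand-in for an LLM call, not any real system's API:

```python
# Sketch of code-based reasoning: delegate the arithmetic to Python
# instead of trusting an LLM's token-by-token math.

def generate_program(question: str) -> str:
    """Stand-in for an LLM that answers with a Python program.
    A real system would prompt a model; here we hard-code one example."""
    # e.g. for "What is 123456789 * 987654321?"
    return "result = 123456789 * 987654321"

def solve_with_code(question: str) -> int:
    program = generate_program(question)
    namespace = {}
    exec(program, namespace)      # run the generated program
    return namespace["result"]    # read back the exact answer

answer = solve_with_code("What is 123456789 * 987654321?")
print(answer)  # exact integer arithmetic, no hallucinated digits
```

The point is the division of labor: the model only has to produce correct *code*, and the interpreter does the number crunching.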
These approaches demonstrate that while traditional LLMs struggle with numerical tasks, there is ongoing research to enhance their numerical reasoning capabilities. However, it's important to note that these are still fundamentally language models with improved numerical abilities, rather than dedicated "numeric models"[4][5].
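To make the NumeroLogic idea from item 1 concrete: every number gets its digit count as a prefix, so the model "knows" a number's magnitude before reading its digits. The `{digits:number}` delimiter style below is my illustration of the idea, not a verified reproduction of the paper's exact format:

```python
import re

def numerologic(text: str) -> str:
    """Prefix every integer with its digit count, e.g. 42 -> {2:42}.
    The {digits:number} syntax is an assumption for illustration."""
    return re.sub(r"\d+", lambda m: f"{{{len(m.group())}:{m.group()}}}", text)

print(numerologic("12 plus 345 equals 357"))
# -> {2:12} plus {3:345} equals {3:357}
```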
Citations:
[1] https://arxiv.org/html/2404.00459v2
[2] https://research.google/blog/evaluating-and-enhancing-probabilistic-reasoning-in-language-models/
[3] https://openreview.net/forum?id=BWS5gVjgeY
[4] https://aclanthology.org/2023.findings-emnlp.1028.pdf
[5] https://aws.amazon.com/what-is/large-language-model/
[6] https://news.mit.edu/2024/technique-improves-reasoning-capabilities-large-language-models-0614
[7] https://snorkel.ai/large-language-models/
[8] https://www.ibm.com/think/topics/large-language-models
My other brain is out to kill Putin.
LARGE MATHEMATICAL MODELS
Large Mathematical Models (LMMs) are emerging as a specialized branch of AI models designed to focus on symbolic reasoning, abstract problem-solving, and formal mathematical proofs[1]. These models are distinct from Large Language Models (LLMs) and Large Numerical Models (LNMs), each serving a specific purpose in the AI ecosystem.
Key Features of LMMs
1. Symbolic manipulation: LMMs excel at solving algebraic problems and performing symbolic calculations[1].
2. Formal proofs: They can generate and verify mathematical proofs for theorems[1].
3. Abstract reasoning: LMMs are capable of working with complex mathematical concepts like topology and group theory[1].
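To give a feel for what "symbolic manipulation" means in practice, here is a toy symbolic differentiator in plain Python. It is not connected to any particular LMM (real systems work over far richer expression languages), just a minimal demonstration of manipulating expressions rather than numbers:

```python
# Toy symbolic differentiation: expressions are nested tuples.
# "x" is the variable; numbers are constants;
# ("+", a, b) and ("*", a, b) are sums and products.

def diff(expr):
    if expr == "x":
        return 1
    if isinstance(expr, (int, float)):
        return 0
    op, a, b = expr
    if op == "+":
        return ("+", diff(a), diff(b))
    if op == "*":  # product rule: (ab)' = a'b + ab'
        return ("+", ("*", diff(a), b), ("*", a, diff(b)))
    raise ValueError(f"unknown op {op}")

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    left, right = evaluate(a, x), evaluate(b, x)
    return left + right if op == "+" else left * right

# d/dx (x*x + 3x) = 2x + 3, so at x = 5 the derivative is 13
e = ("+", ("*", "x", "x"), ("*", 3, "x"))
print(evaluate(diff(e), 5))  # -> 13
```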
Applications
- Theorem proving: LMMs can assist in generating or verifying formal proofs for mathematical conjectures[1]. Oh god, is this presaging the return of LISP? Hope you like skolemization and conjunctive normal form.
- Symbolic problem-solving: They can manipulate symbolic expressions in applied mathematics[1].
- Research assistance: LMMs can aid mathematicians in exploring new mathematical structures and concepts[1].
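Since the theorem-proving bullet jokes about skolemization and conjunctive normal form: the resolution method it alludes to really is this small at its core. A toy resolution step in Python (the clause representation is my own choice, purely illustrative), deriving B from A and the CNF of A → B:

```python
# Resolution theorem proving in miniature, CNF and all.
# A literal is a (name, polarity) pair; a clause is a frozenset of literals.

def resolve(c1, c2):
    """Return all resolvents of two clauses (the resolution rule)."""
    out = []
    for (name, pos) in c1:
        if (name, not pos) in c2:
            out.append((c1 - {(name, pos)}) | (c2 - {(name, not pos)}))
    return out

fact = frozenset({("A", True)})                        # A
implication = frozenset({("A", False), ("B", True)})   # CNF of A -> B is ¬A ∨ B

derived = resolve(fact, implication)
print(derived)  # derives the unit clause B (modus ponens, by resolution)
```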
Recent Developments
1. Google DeepMind has made significant progress in this area. Their AI systems, AlphaProof and AlphaGeometry 2, successfully solved four out of six problems from the 2024 International Mathematical Olympiad, achieving a silver medal equivalent[4].
2. FunSearch, another tool by Google DeepMind, has demonstrated the ability to make mathematical discoveries by solving long-standing problems in pure mathematics[2].
3. Researchers are working on combining LLMs with formal proof systems to enhance mathematical problem-solving capabilities. This approach has shown success in narrow domains like IMO geometry problems and is being expanded to more general applications[3].
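The "LLM + formal proof system" combination in item 3 usually means a model proposes proof steps and a proof assistant checks them. For flavor, here is what a machine-checkable statement looks like in Lean 4 (a trivial example of my own, not output from any of the systems cited):

```lean
-- A machine-checked proof in Lean 4: addition on naturals is commutative.
-- An LLM-guided prover would search for terms like `Nat.add_comm`;
-- the kernel then verifies that the proof is airtight.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The appeal of the hybrid approach is exactly this checkability: the language model can guess freely, because only proofs the kernel accepts count.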
While LMMs are still in development, they show great promise in advancing mathematical research and problem-solving. As these models continue to evolve, they are expected to become powerful tools for mathematicians and researchers across various scientific disciplines.
Citations:
[1] https://www.artificial-intelligence.blog/ai-news/why-ai-needs-large-numerical-models-lnms-for-mathematical-mastery
[2] https://www.technologyreview.com/2023/12/14/1085318/google-deepmind-large-language-model-solve-unsolvable-math-problem-cap-set/
[3] https://www.reddit.com/r/math/comments/1bo4yj9/ai_large_mathematics_models_when_and_how_do_you/
[4] https://www.technologyreview.com/2024/07/25/1095315/google-deepminds-ai-systems-can-now-solve-complex-math-problems/
[5] https://www.understandingai.org/p/large-language-models-explained-with
[6] https://www.nature.com/articles/s41586-023-06924-6
[7] https://arxiv.org/abs/2312.04556