
Deep learning for (auto-)formalization / theorem proving / mathematical and general purpose reasoning #1

alreadydone opened this issue Jun 26, 2020 · 5 comments



alreadydone commented Jun 26, 2020

[The following two posts are my replies to /u/starspawn0's comment on the paper Language Modeling for Formal Mathematics by Christian Szegedy et al., posted on the subreddit 'thisisthewayitwillbe', in which he asked "why they don't just try to train large language models to write proofs the way humans do, and worry about mapping it all to a more useful form (HOL/Isabelle-checkable) later".]


alreadydone commented Jun 26, 2020

I think this is part of a bigger program of autoformalization (with the internal Google code name N2Formal, which I think stands for 'natural to formal').

These Zulip threads mention additional papers: Machine Learning for Theorem Proving and AI and theorem proving continued.

Szegedy is on Twitter and the leanprover Zulip server, and you may ask him there, but first take a look at his manifesto, A Promising Path Towards Autoformalization and General Artificial Intelligence (CICM 2020). Although I can't find a paragraph where he directly addresses the question "why not just do mathematics in an informal way", I think I can glean some clues from the manifesto.

  1. He thinks that formalization in the broad sense includes programming, and he has always been motivated by the prospect of automated programming, as expressed in the interview. He is also interested in formal verification of software.

  2. Being (at least in principle) mechanically verifiable is an especially good feature of math; the verifier acts as a critic or adversary in an RL pipeline, generating a strong supervision/reward signal that would eventually allow machines to achieve superhuman performance. It would also leave no place for errors in the math literature [1] to hide. David McAllester even thinks that a "Zero" approach with minimal human input is possible; he recently formulated what he thinks is the objective of the game of mathematics in his MathZero paper (video talk), with a formal foundation, which he also formulated, as the rules. However, Szegedy thinks that a huge corpus of human mathematics is necessary:

Interestingness is in the eye of the beholder and is highly contextual. There is no known way to guide a search process automatically towards interesting theorems and notions. Therefore, the safest option is to use the entirety of human mathematics as a basis for training and benchmark for testing.

As I understand it, he wants the math literature mainly to serve as a source of goals the agent can try to achieve to get a reward (see Section 7), an anchor so that the agent doesn't stray too far from what humans find interesting, rather than something to imitate.

  3. Filling in all the details to enable formal verification may not be as hard as you think, especially if done by machines. Empirically, the de Bruijn factor (the ratio of the size of a formalized text to the size of the corresponding informal text) is usually only around 4. Once machines gain the right intuition, it shouldn't take too long for them to synthesize a proof.
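The verifier-as-critic idea from point 2 can be sketched as a toy generate-and-verify loop. Everything here is hypothetical and illustrative: the stand-in `verify` is not a real kernel checker, the "tactics" and the hidden `REQUIRED` table are invented for the example, and the policy update is the crudest possible reinforcement, a minimal sketch of the idea rather than any actual pipeline.

```python
import random

# Toy "theorems": each secretly requires one specific tactic to close.
REQUIRED = {"a + b = b + a": "ring", "n < n + 1": "omega"}

def verify(goal, proof):
    """Stand-in for a mechanical proof checker (e.g. a HOL/Lean/Metamath
    kernel): accepts iff the proof applies the tactic the goal needs."""
    return REQUIRED[goal] in proof

def propose_proof(policy, length=1):
    """Stand-in for a learned generator: sample tactics proportionally
    to their current policy weights."""
    tactics, weights = zip(*policy.items())
    return random.choices(tactics, weights=weights, k=length)

def training_loop(goals, iterations=2000):
    policy = {"ring": 1.0, "omega": 1.0, "simp": 1.0}
    for _ in range(iterations):
        goal = random.choice(goals)
        proof = propose_proof(policy)
        if verify(goal, proof):          # the verifier acts as the critic
            for tactic in proof:         # reinforce what earned reward
                policy[tactic] += 0.1
    return policy

policy = training_loop(list(REQUIRED))
# The useless tactic never earns reward, so the useful ones dominate.
assert policy["ring"] > policy["simp"] and policy["omega"] > policy["simp"]
```

The point of the sketch is only the shape of the loop: the checker supplies the reward signal, so no human labeling is needed once goals are fixed, which is exactly why mechanical verifiability makes math unusually well suited to RL.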

By the way, Szegedy actually wants to take images as input instead of .tex files:

However mathematical content is heavily based on the use of formulas and also diagrams and geometric illustrations might play an important role in informing the reader. Therefore the best path seems to rely on images instead of textual representation. Given that humans work from the same representation, this can preempt a lot of unexpected failure modes and engineering that would go into reconciling various types of textual representations of the same input.

[1] Kevin Buzzard admitted that his talk was exaggerated to troll mathematicians, but the problem of errors in the mathematical literature does exist. He has built a remarkable community of people interested in formalizing research-level mathematics around him, with a blog, a Twitter account, a Discord server, and a Twitch account (for live-streaming Lean coding / puzzle-solving speedruns). Event calendar, in case of interest.

Buzzard, McAllester, and Szegedy are among the speakers of AITP 2019 and 2020.


alreadydone commented Jul 4, 2020

I think there is potential for human mathematicians to teach a lot of math to machines through a powerful language model like the one behind the OpenAI API [2], but the system would need some permanent external memory behind the scenes to store formalized theorems and proofs in a hierarchical, compositional manner, and the language model would need to assign names (character/binary strings of variable length, like addresses or pointers) to these formalized objects and call them by their names as necessary (this is exactly what a language model should do! Quantization as in VQ-VAE could do the job, but we would like short names for simple objects and long names for complex objects: this is language as compression and as a measure of complexity (Kolmogorov)). [3] This seems to me the only way to achieve lifelong few-shot learning and avoid catastrophic forgetting.

More generally, the language model could synthesize programs (and give names to them) and outsource some of its capabilities to those programs, freeing up the capacity of the model itself; e.g. it could relegate tasks like adding two 3-digit numbers to a calculator by issuing a command (calling a program) instead of doing the arithmetic itself. These programs could implement tools, models, and concepts/abstractions. This is how I intend to bridge the neural and symbolic worlds. The library of programs is considered part of the environment (in the RL setting) that the language model (the agent) has the power to modify, and here we see the role of language as the communication device between the language model and the program library. In training the model to outsource, some sort of self-awareness might emerge, as the model needs to be aware of a part of its computation graph along with its weights in order to copy them over or distill a simpler program out of them.
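A minimal sketch of the name-addressed program library described above (the class and function names are hypothetical illustrations, not any existing system): the agent emits a command naming a stored program instead of performing the computation itself, and short names are reserved for simple, frequently used capabilities.

```python
from typing import Callable

class ProgramLibrary:
    """External memory mapping variable-length names to stored programs;
    in the RL framing above, this is part of the environment that the
    agent is allowed to modify."""

    def __init__(self):
        self._programs: dict[str, Callable] = {}

    def register(self, name: str, program: Callable) -> None:
        self._programs[name] = program

    def call(self, name: str, *args):
        return self._programs[name](*args)

library = ProgramLibrary()
# A simple, common capability gets a short name...
library.register("add", lambda a, b: a + b)
# ...while a rarer, composite one gets a longer name (language as compression).
library.register("sum_of_squares", lambda *xs: sum(x * x for x in xs))

def agent_step(command: str, library: ProgramLibrary):
    """Toy 'agent': parse a command like 'add 123 456' and outsource the
    work to the named program instead of computing it internally."""
    name, *args = command.split()
    return library.call(name, *(int(a) for a in args))

# The model relegates 3-digit addition to a named program.
assert agent_step("add 123 456", library) == 579
```

The command string is the communication channel between model and library: the model only has to learn to emit the right name with the right arguments, not to carry out the computation itself.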

I think commercialization is a great move since it would provide a reward signal backed by real money; this is an advantage over un-/self-supervised learning from text (though fast feedback during dialogues can also serve as a reward signal). To engage more people, companies whose many users have purchased and possess a lot of virtual currency (e.g. in games) could use that as a substitute. People would not waste money teaching the agent useless stuff (or hate speech, which some chatbots have learned); they would treat the agent more like their student or child, expecting that the trained agent will be able to generate value for them by solving important problems or providing desired services, so that their investment will be paid back. (As an example, Leela Zero has been called a "baby" by volunteers who devoted compute to generating training data for the Go bot.) Areas of high-value scientific research may see the first instances of such systematic human-to-AI knowledge transfer. The agent could also pay humans in exchange for manual labor (to conduct lab work, for example), giving instructions in words or multimedia and making human limbs an extension of the agent itself. The moment it becomes easier to teach machines than humans will mark the beginning of the singularity era.

In the latest news, OpenAI has put online a proof assistant for Metamath and expects that "the proof assistant will help you be more productive building your Metamath proofs while helping us improve our model’s accuracy at the same time".

[2] Cf. OpenAI's statement that they are going to continually improve their API, and Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text from DeepMind (which reminds me of Hierarchical Decision Making by Generating and Following Natural Language Instructions and CraftAssist: A Framework for Dialogue-enabled Interactive Agents from Facebook last year). See also another manifesto, AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence, by Jeff Clune, OpenAI.

[3] Cf. Neural Symbolic Reader, the Neuro-Symbolic Concept Learner, TP-Transformer, and TP-N2F, though none of these was aiming at exactly this goal.

alreadydone commented

[After I pointed out Szegedy's document A Promising Path Towards Autoformalization and General Artificial Intelligence to /u/starspawn0, he reposted it as a topic on the same subreddit; the following is my comment.]

Thanks to /u/starspawn0 for making this more visible. I just want to add a link to my original post: https://www.reddit.com/r/thisisthewayitwillbe/comments/hcxymf/200604757_language_modeling_for_formal/fvq3372a

See also here for a review of the document; I was following Jason Rute in calling it a "manifesto".

Learning with AMIGo: Adversarially Motivated Intrinsic Goals just came out; to me it looks like a step towards a Zero (tabula rasa) approach to math, and it seems relevant to how mathematicians come up with research or pedagogical problems (hard open problems vs. routine test problems) and define new objects of study that lead to interesting and challenging new areas of research.


alreadydone commented Jul 8, 2020

Additional relevant papers (continually updated):

Learning to Control Self-Assembling Morphologies: A Study of Generalization via Modularity, Berkeley (Pathak et al.), https://arxiv.org/abs/1902.05546, tweet
What Can Neural Networks Reason About?, https://arxiv.org/abs/1905.13211, tweet
Large Memory Layers with Product Keys, Facebook, https://arxiv.org/abs/1907.05242
On the Measure of Intelligence, Google (François Chollet), https://arxiv.org/abs/1911.01547
Deep Learning for Symbolic Mathematics, Facebook (Lample--Charton), https://arxiv.org/abs/1912.01412, OpenReview
The Use of Deep Learning for Symbolic Integration: A Review of (Lample and Charton, 2019), NYU (Ernest Davis), https://arxiv.org/abs/1912.05752
Differentiable Reasoning on Large Knowledge Bases and Natural Language, UCL/Facebook, https://arxiv.org/abs/1912.10824
Hierarchical Rule Induction Network for Abstract Visual Reasoning, Beihang U, https://arxiv.org/abs/2002.06838
Learning to Prove Theorems by Learning to Generate Theorems, Princeton, https://arxiv.org/abs/2002.07019, OpenReview
Natural Language Premise Selection: Finding Supporting Statements for Mathematical Text, U Manchester, https://arxiv.org/abs/2004.14959, ACL version
Human Instruction-Following with Deep Reinforcement Learning via Transfer-Learning from Text, DeepMind, https://arxiv.org/abs/2005.09382, tweet
Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities, https://arxiv.org/abs/2005.12533
Deep Learning with Graph-Structured Representations (from graph neural nets (and graph convolutions) to structure discovery (objects, relations, events), attentive grouping), Google AI (Thomas Kipf), https://hdl.handle.net/11245.1/1b63b965-24c4-4bcd-aabb-b849056fa76d
Transcoder: Unsupervised Translation of Programming Languages, Facebook (Lample et al.), https://arxiv.org/abs/2006.03511, YouTube
DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning, Tenenbaum et al., https://arxiv.org/abs/2006.08381, YouTube, tweet
Neural Program Synthesis with a Differentiable Fixer, Google Brain (Sutton et al.), https://arxiv.org/abs/2006.10924
Learning to Prove from Synthetic Theorems, DeepMind, https://arxiv.org/abs/2006.11259
Discovering Symbolic Models from Deep Learning with Inductive Biases, https://arxiv.org/abs/2006.11287, blog post
Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning", Microsoft (Polozov et al.), https://arxiv.org/abs/2006.11524
Object-Centric Learning with Slot Attention, Google Brain (Kipf et al.), https://arxiv.org/abs/2006.15055, tweet, tweet
Scaling Symbolic Methods using Gradients for Neural Model Explanation, Google (Singh et al.), https://arxiv.org/abs/2006.16322
Human-centered automated proof search, https://www.researchgate.net/publication/342183882_Human-centered_automated_proof_search, cf. A fully automatic problem solver with human-style output, Ganesalingam--Gowers
Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge, Google Research, https://arxiv.org/abs/2007.00849
INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving, Vector Institute, https://arxiv.org/abs/2007.02924
Learning Branching Heuristics for Propositional Model Counting, Toronto/Berkeley/Vector, https://arxiv.org/abs/2007.03204
Do Transformers Need Deep Long-Range Memory, DeepMind, https://arxiv.org/abs/2007.03356, ACL
Strong Generalization and Efficiency in Neural Programs, DeepMind (Yujia Li, Kohli, Vinyals et al.), https://arxiv.org/abs/2007.03629, tweet
The Scattering Compositional Learner: Discovering Objects, Attributes, Relationships in Analogical Reasoning, Vector Institute, https://arxiv.org/abs/2007.04212
Hierarchical Relational Inference, https://github.com/oolworkshop/oolworkshop.github.io/raw/master/pdf/OOL_24.pdf, tweet
Learning Graph Structure With A Finite-State Automaton Layer, Google, https://arxiv.org/abs/2007.04929, tweet
One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control, Pathak et al., https://arxiv.org/abs/2007.04976, site, tweet
Evaluating the Apperception Engine, DeepMind etc. (Kohli et al.), https://arxiv.org/abs/2007.05367, cf. Making sense of sensory input
Generating Programmatic Referring Expressions via Program Synthesis, UPenn/UWisc/Google (Singh et al.), http://pages.cs.wisc.edu/~aws/papers/icml20b.pdf

https://twitter.com/skaasj/status/1272574484336324609
https://twitter.com/roeiherzig/status/1277920944359702537
https://twitter.com/thomaskipf/status/1278708267351519233
https://twitter.com/thomaskipf/status/1277294661330038789
NL->SQL, Polozov: https://twitter.com/Skiminok/status/1279656013839257600

CICM 2020:
Tree Neural Networks in HOL4, Gauthier, https://link.springer.com/chapter/10.1007/978-3-030-53518-6_18
Simple Dataset for Proof Method Recommendation in Isabelle/HOL, Nagashima, https://link.springer.com/chapter/10.1007/978-3-030-53518-6_21
Dataset Description: Formalization of Elementary Number Theory in Mizar, Naumowicz, https://link.springer.com/chapter/10.1007/978-3-030-53518-6_22
The Tactician: A Seamless, Interactive Tactic Learner and Prover for Coq (no DL), Urban et al., https://link.springer.com/chapter/10.1007/978-3-030-53518-6_17

GPT-3: Language Models are Few-Shot Learners, https://arxiv.org/abs/2005.14165
The bAbI project of Facebook AI Research, organized towards the goal of automatic text understanding and reasoning, https://research.fb.com/downloads/babi/
ANLI: https://ai.facebook.com/blog/introducing-a-new-large-scale-dynamic-data-set-to-push-the-limits-of-natural-language-processing/
LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning, Fudan/Westlake, https://arxiv.org/abs/2007.08124
The NetHack Learning Environment (procedurally generated RL env), Facebook, paper, blog post
Attention and Memory in Deep Learning, DeepMind, tweet, YouTube
DeepMind resource list, https://storage.googleapis.com/deepmind-media/research/New_AtHomeWithAI%20resources.pdf
ICML2020 Tutorial on Model-Based Methods in Reinforcement Learning, https://sites.google.com/view/mbrl-tutorial, slides, tweet
Object-Oriented Learning (OOL): Perception, Representation, and Reasoning #OOL2020 workshop at #ICML2020
https://www.aicrowd.com/: MineRL Competition, Unity Obstacle Tower Challenge

Biologically plausible approximations of gradient descent; deep learning and the brain:
Predictive Coding Approximates Backprop along Arbitrary Computation Graphs, Edinburgh/Sussex, https://arxiv.org/abs/2006.04182
Learning to Learn with Feedback and Local Plasticity, Columbia U, https://arxiv.org/abs/2006.09549
Artificial Neural Networks Accurately Predict Language Processing in the Brain, Tenenbaum et al., https://www.biorxiv.org/content/10.1101/2020.06.26.174482v1

@alreadydone
Copy link
Owner Author

alreadydone commented Jul 8, 2020

Evolution of McAllester's set-theoretic dependent type theory as language for mathematics:

Implementation and Abstraction in Mathematics, Jul 29, 2014, https://arxiv.org/abs/1407.7274v2
Tarski and Mentalese, Aug 2, 2014, https://machinethoughts.wordpress.com/2014/08/02/tarksi-and-mentalese/
Metaphor and Mathematics, Aug 3, 2014, https://machinethoughts.wordpress.com/2014/08/03/metaphor-and-mathematics/
Morphoid Type Theory, Jan 26, 2015, https://arxiv.org/abs/1407.7274v4
The Foundations of Mathematics, Jan 27, 2015, https://machinethoughts.wordpress.com/2015/01/27/the-foundations-of-mathematics-2/
Morphoid Type Theory, Dec 28, 2015, https://arxiv.org/abs/1407.7274v5
Why we need a new foundation of mathematics, Jan 1, 2016, https://machinethoughts.wordpress.com/2016/01/01/why-we-need-a-new-foundation-of-mathematics/
Isomorphism within Naive Type Theory, Jul 14, 2017, https://arxiv.org/abs/1407.7274v6
Formalism, Platonism and Mentalese, Jul 17, 2017, https://machinethoughts.wordpress.com/2017/07/17/formalism-platonism-and-mentalese/
Mathematics, Ontology and Reference, AITP 2017, http://aitp-conference.org/2017/slides/David_AITP.pdf
Isomorphism within Naive Type Theory, Jan 21, 2018, https://arxiv.org/abs/1407.7274v7
RL and the Game of Mathematics, Apr 14, 2018, https://machinethoughts.wordpress.com/2018/04/14/rl-and-the-game-of-mathematics/
If mathematical proof is a game, what are the states and moves?, AITP 2018, http://aitp-conference.org/2018/slides/DMA.pdf
Isomorphism Revisited, Dec 5, 2019, https://arxiv.org/abs/1912.02885v1
Isomorphism Revisited, Dec 6, 2019, http://aitp-conference.org/2020/abstract/paper_19.pdf
MathZero, The Classification Problem, and Set-Theoretic Type Theory, May 18, 2020, https://arxiv.org/abs/2005.05512v2

@alreadydone alreadydone changed the title Deep learning for autoformalization / theorem proving / mathematical and general purpose reasoning Deep learning for (auto-)formalization / theorem proving / mathematical and general purpose reasoning Jul 9, 2020