id: 72b78499c4b54fac9c784f62cf64c178 parent_id: 2291d866cb8c4c91a853f87603e74deb item_type: 1 item_id: c66c0433c6a14de9b9a40e5a7316e382 item_updated_time: 1708544188366 encryption_applied: 0 updated_time: 2024-02-21T19:39:07.840Z created_time: 2024-02-21T19:39:07.840Z type_: 13

Body diff (inserted text, context fragments kept as-is):

atures. Training is done on previous proofs: we know which lemmas were used, and these are definitely relevant, so we can try to learn which lemmas are relevant for a given set of features.

In practice, training and testing are not as separate as in 'regular' AI. Something like Stable Diffusion has a long, expensive training phase, which results in a network that is afterwards only used for evaluation and no longer learns. This is not feasible in proof assistants: relevant lemmas are often proved one after another, so training and evaluation are generally more or less interleaved. Hence, training must be fast (ideally, we want something that can train quickly on a new data point, in a kind of "streaming" fashion).

Options explored: naive Bayes, k-Nearest Neighbors

### Tr
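The naive Bayes option mentioned in the note lends itself to exactly the streaming setup described: each finished proof is one training point (the goal's features plus the lemmas the proof used), and updating the model is a handful of counter increments, so training stays fast and can be interleaved with evaluation. A minimal sketch, assuming goals are represented as sets of symbol features; all class and variable names here are hypothetical illustrations, not taken from any actual prover:

```python
import math
from collections import defaultdict


class OnlineNaiveBayes:
    """Streaming naive Bayes for premise selection (illustrative sketch).

    For each known lemma we track how often it was used, and how often
    each feature co-occurred with a proof that used it. Training on a
    new data point is O(features * used lemmas) counter updates, so it
    fits the "train quickly on each new proof" requirement.
    """

    def __init__(self):
        self.lemma_count = defaultdict(int)                    # lemma -> #proofs using it
        self.cooccur = defaultdict(lambda: defaultdict(int))   # lemma -> feature -> count
        self.total = 0                                         # #proofs seen so far

    def train(self, features, used_lemmas):
        # One new data point: features of a goal, lemmas its proof used.
        self.total += 1
        for lemma in used_lemmas:
            self.lemma_count[lemma] += 1
            for f in features:
                self.cooccur[lemma][f] += 1

    def rank(self, features, top_k=3):
        # Score each lemma by log P(lemma) + sum_f log P(f | lemma),
        # with Laplace smoothing so unseen features don't zero a score.
        scores = {}
        for lemma, n in self.lemma_count.items():
            score = math.log(n / self.total)
            for f in features:
                score += math.log((self.cooccur[lemma][f] + 1) / (n + 2))
            scores[lemma] = score
        return sorted(scores, key=scores.get, reverse=True)[:top_k]


# Toy usage: training and querying interleave freely.
nb = OnlineNaiveBayes()
nb.train({"add", "nat"}, {"add_comm"})
nb.train({"add", "nat"}, {"add_comm"})
nb.train({"mul"}, {"mul_comm"})
print(nb.rank({"add", "nat"}))  # add_comm should rank first
```

A k-NN learner would fit the same interface: `train` just stores the data point, and `rank` scores lemmas by the proofs of the nearest stored goals, so both options keep per-proof training cheap.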