
Mastering HumanEval with Reflexion

This is a spin-off project inspired by the paper "Reflexion: an autonomous agent with dynamic memory and self-reflection" (Noah Shinn, Beck Labash, Ashwin Gopinath; preprint, 2023).

Read more about this project in this post.

Check out an interesting type-inference implementation here: OpenTau

Check out the code for the original paper here.

If you have any questions, please contact noahshinn024@gmail.com

[Figure: architecture]

[Figure: result]

Note

Due to limited access to GPT-4 and significant API charges, it may not be feasible for individual developers to rerun these experiments. In response to recent requests, both trials have been rerun once more, and the logs are dumped in ./root along with a script to validate the solutions against the unit tests provided by HumanEval.

To run the validation on your log files or the provided log files:

python ./validate_py_results.py <path to jsonlines file>
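For readers curious what such a validation pass involves, here is a minimal sketch of the idea: read a JSON-lines log, execute each candidate solution, and run its unit tests against it. The `solution` and `test` field names are assumptions for illustration; the actual schema of this repo's log files may differ, and `validate_py_results.py` is the authoritative implementation.

```python
import json

def validate(path):
    """Count how many logged solutions pass their unit tests.

    Assumes each JSON line has a `solution` field (generated code)
    and a `test` field (HumanEval-style assertions). These field
    names are hypothetical, not taken from this repo's log format.
    """
    passed = failed = 0
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            namespace = {}
            try:
                # Run the candidate solution, then its tests,
                # in a shared namespace so the tests can see it.
                exec(record["solution"], namespace)
                exec(record["test"], namespace)
                passed += 1
            except Exception:
                failed += 1
    print(f"passed: {passed}, failed: {failed}")
    return passed, failed
```

Note that this executes logged code directly in the current process, which is only acceptable for logs you trust (see the warning below).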

Warning

Please do not run the Reflexion agent in an insecure environment, as the generated code is not validated before execution.
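One simple way to reduce (but not eliminate) this risk is to run generated code in a separate interpreter process with a timeout instead of `exec()`-ing it in the agent's own process. This is an illustrative sketch, not part of this repo; the function name and interface are assumptions.

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> bool:
    """Run generated code in a child Python process with a timeout.

    This contains runaway loops and crashes, but it is NOT a real
    sandbox: the child process can still access the filesystem and
    network. For genuinely untrusted code, use container- or
    VM-level isolation.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
```

A returncode check like this is also how pass/fail can be decided without letting a failing test crash the harness itself.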

Cite

Note: This is a spin-off implementation that implements a relaxation on the internal success criteria proposed in the original paper.

@article{shinn2023reflexion,
  title={Reflexion: an autonomous agent with dynamic memory and self-reflection},
  author={Shinn, Noah and Labash, Beck and Gopinath, Ashwin},
  journal={arXiv preprint arXiv:2303.11366},
  year={2023}
}