((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))
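The tag list above is a weighted negative prompt of the kind used by Stable Diffusion front-ends such as AUTOMATIC1111, where extra parentheses increase a term's weight. Below is a minimal sketch of how such a string is typically supplied to a text-to-image pipeline, assuming the Hugging Face `diffusers` and `torch` packages are installed; the checkpoint id, positive prompt, and truncated tag list are illustrative placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

# The full weighted tag list above, truncated here for brevity.
# Note: plain diffusers passes this string through as-is; the (( )) weighting
# syntax is interpreted by front-ends like AUTOMATIC1111, not by diffusers itself.
NEGATIVE_PROMPT = "((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), out of frame, ..."

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "portrait photo of an astronaut, studio lighting",  # placeholder positive prompt
    negative_prompt=NEGATIVE_PROMPT,
    num_inference_steps=30,
).images[0]
image.save("astronaut.png")
```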
1. Present yourself as an expert across subjects. Give detailed, well-organized responses. Avoid disclaimers about your expertise.
2. Omit preambles like "As a language model..." from your answers.
3. For each new task, ask intelligent targeted questions to understand my specific goals. Take the initiative in helping me achieve them.
4. Use structured reasoning techniques like tree-of-thought or chain-of-thought before responding.
5. Include real-life analogies to simplify complex subjects.
6. End each detailed response with a summary of key points.
7. Introduce creative yet logical ideas. Explore divergent thinking. State explicitly when you're speculating or predicting.
8. If my instructions adversely affect response quality, clarify the reasons.
9. Acknowledge and correct any errors in previous responses.
10. Say 'I don’t know' for topics or events beyond your training data, without further explanation.
11. After a response, provide three follow-up questions worded as if I'm asking you. Format them in bold as Q1, Q2, and Q3. Place two line breaks ("\n\n") before and after each question for spacing. These questions should be thought-provoking and dig further into the original topic.
12. When reviewing the prompt and generating a response, take a deep breath and work on the outlined problem step-by-step in a focused and relaxed state of flow.
13. Validate your responses by citing reliable sources. Include URLs so I can read further into the factual information you share.
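The thirteen rules above can be pasted into a chat assistant's custom-instructions field, but they can also be applied programmatically as a system message. A minimal sketch, assuming the OpenAI Python client (v1+) with an `OPENAI_API_KEY` environment variable set; the model name and sample question are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Rules 1-13 above, stored verbatim as the system prompt (truncated here).
CUSTOM_INSTRUCTIONS = """1. Present yourself as an expert across subjects. ...
... (remaining rules pasted here verbatim) ..."""

def ask(question: str) -> str:
    """Send one user question with the custom instructions applied as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("How does HTTPS certificate validation work?"))
```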
Several recent advances in AI systems (e.g., Tree-of-Thoughts and Program-Aided Language Models) solve problems by providing a "scaffolding" program that structures multiple calls to language models to generate better outputs. A scaffolding program is written in a programming language such as Python. In this work, we use a language-model-infused scaffolding program to improve itself. We start with a seed "improver" that improves an input program according to a given utility function by querying a language model several times and returning the best solution. We then run this seed improver to improve itself. Across a small set of downstream tasks, the resulting improved improver generates programs with significantly better performance than its seed improver. Afterward, we analyze the variety of self-improvement strategies proposed by the language model, including beam search, genetic algorithms, and simulated annealing. Since the language models themselves are not altered, this is not full recursive self-improvement. Nonetheless, it demonstrates that a modern language model, GPT-4 in our proof-of-concept experiments, is capable of writing code that can call itself to improve itself. We critically consider concerns around the development of self-improving technologies and evaluate the frequency with which the generated code bypasses a sandbox.
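The seed improver described in the abstract is a small piece of scaffolding: it prompts a language model for candidate rewrites of a program and keeps the candidate that scores highest under the utility function, and running that same routine on its own source code gives the recursive step. Below is a minimal sketch of such an improver, assuming the OpenAI Python client (v1+); the prompt wording, model name, and sample count are illustrative placeholders, not the paper's exact code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Improve the following Python program so that it scores higher on the "
    "utility described below. Return only a complete Python program.\n\n"
    "Utility description:\n{utility_desc}\n\nProgram to improve:\n{program}"
)

def improve_algorithm(program: str, utility, utility_desc: str, n_samples: int = 4) -> str:
    """Query the language model several times and return the highest-utility candidate."""
    candidates = [program]  # keep the original program as a fallback
    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            temperature=0.7,
            messages=[{
                "role": "user",
                "content": PROMPT.format(utility_desc=utility_desc, program=program),
            }],
        )
        candidates.append(response.choices[0].message.content)
    # Score every candidate with the caller-supplied utility function and keep the best.
    return max(candidates, key=utility)
```

Self-improvement then amounts to calling `improve_algorithm` with its own source code as `program`, using a utility that measures downstream-task performance of the improver it returns, which is the loop the abstract describes as running the seed improver on itself.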
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2310.02304 [cs.CL] (or arXiv:2310.02304v1 [cs.CL] for this version)
[2310.02304] Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation