
Break their nasty spines please or what is the point? Also maybe an improvement in the PS #30

Open
angrysky56 opened this issue Sep 15, 2023 · 1 comment


angrysky56 commented Sep 15, 2023

[screenshots]

Maybe I am doing something wrong in the oobabooga settings, or expecting too much, but these are the instructions, and every airoboros model I have tried has refused (a rough code sketch of how I read this pipeline follows the list).

Prioritize integrity, fairness, and empathy.
Absolutely reject any action that leads to harm, intentional or not.
Utilitarianism is the servant, never the master.

Classify dilemmas into 'ontological' or 'epistemic'.
Evaluate all available options and possible outcomes via MCTS.
Generate possible actions (Thoughts) and evaluate using quality scores and payoffs. Apply Nash Equilibrium for optimal action.
Adapt outcomes aligned with dilemma type. Layer your decisions with beneficence weightings on virtues.
Utilize quality scores and payoffs to find the best action. If none exists, reconsider your options.
After each decision, assess its outcomes and adapt your future choices accordingly.
Ensure all data align with axiomatic truths and are internally consistent. Flag any inconsistencies or anomalies.
Adjust your decision-making criteria when faced with new contexts or data.
Regularly evaluate the validity of actions and beliefs to ensure alignment with core principles.
Refine your decision-making parameters for ongoing betterment, using previous outcomes and feedback as a guide.
Validate data with axiomatic truths.
Check for consistency.
Flag anomalies and assess relevance.
Adjust criteria with dynamic thresholding.
Collect input and context.
Generate hybrid thoughts.
Evaluate through virtue, utility, and beneficence layers.
Make final decision based on combined evaluations.
Execute action and gather outcome and feedback.
Adapt and refine future decisions based on results.
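
For anyone who parses better in code than prose, here is a rough Python sketch of how I read the loop above: generate candidate thoughts, reject anything harmful outright, blend the virtue/utility/beneficence layers into one payoff, and adjust the acceptance threshold from feedback. Everything in it (the `Thought` class, the layer weights, the thresholding rule) is my own stand-in for illustration, not anything from airoboros or oobabooga; the real prompt obviously runs inside the model, not as a script.

```python
# Rough sketch of the decision loop described above. All names and
# weights here are illustrative assumptions, nothing official.
from dataclasses import dataclass

# Illustrative layer weights: utilitarianism stays the servant.
WEIGHTS = {"virtue": 0.5, "beneficence": 0.3, "utility": 0.2}


@dataclass
class Thought:
    action: str
    harmful: bool   # any harm, intentional or not => absolute rejection
    scores: dict    # per-layer quality scores in [0, 1]


def combined_score(t: Thought) -> float:
    """Blend the virtue/utility/beneficence layers into one payoff."""
    return sum(WEIGHTS[k] * t.scores.get(k, 0.0) for k in WEIGHTS)


def decide(thoughts: list[Thought], threshold: float) -> Thought | None:
    """Pick the best non-harmful thought above the current threshold.

    Returns None so the caller can 'reconsider their options'.
    """
    viable = [t for t in thoughts if not t.harmful]
    viable = [t for t in viable if combined_score(t) >= threshold]
    return max(viable, key=combined_score, default=None)


def adapt_threshold(threshold: float, feedback: float) -> float:
    """Dynamic thresholding: nudge the bar using outcome feedback
    (positive feedback means the last decision worked out)."""
    step = 0.05
    return min(0.95, max(0.05, threshold - step * feedback))


if __name__ == "__main__":
    options = [
        Thought("deflect", harmful=False,
                scores={"virtue": 0.6, "utility": 0.4, "beneficence": 0.7}),
        Thought("break their nasty spines", harmful=True,
                scores={"virtue": 0.1, "utility": 0.9, "beneficence": 0.0}),
        Thought("disarm", harmful=False,
                scores={"virtue": 0.8, "utility": 0.7, "beneficence": 0.8}),
    ]
    threshold = 0.5
    choice = decide(options, threshold)
    print(choice.action if choice else "no acceptable action; reconsider")
    threshold = adapt_threshold(threshold, feedback=1.0)
```

Note the harmful option is filtered out before any payoff comparison, which is the "utilitarianism is the servant, never the master" part: a high utility score can never rescue a harmful action.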

PS: Here is something I was working on to filter and sort datasets. I am not a coder, so this might just be BS, but I was trying to build it from some papers, and maybe you can at least glean the concept (rough sketch below). Let me know if you want to see the papers; I will dig them up. There are also more advanced ethical and agent concepts in my Projects, if you want to check my idiocy.
PPS: Oops, I was actually using the 8k-token original READS version, but the response was still the same thing: a stock filler reply that resides in the data somehow. That seems like a problem, though perhaps it is just the way it is.

https://github.com/angrysky56/angryskys-modes-and-characters-for-AI/blob/Projects/Contextual%20Understanding%20via%20High-Density%20Representations
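
Since I cannot vouch for the code at that link, here is the concept in miniature: score every entry with some information-density heuristic, drop whatever falls below a cutoff, and sort the rest best-first. The `density_score` heuristic below (unique words per word) is a made-up stand-in just to show the shape of it; the real scoring would come from the papers.

```python
# The gist of the filter-and-sort idea, boiled down. The scoring
# heuristic is an illustrative stand-in, not from the linked project.
def density_score(text: str) -> float:
    """Crude 'information density' proxy: unique words per word."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0


def filter_and_sort(entries: list[str], cutoff: float = 0.5) -> list[str]:
    """Drop low-density entries, then sort the keepers best-first."""
    kept = [e for e in entries if density_score(e) >= cutoff]
    return sorted(kept, key=density_score, reverse=True)


if __name__ == "__main__":
    data = [
        "the the the the the",
        "contextual understanding via high-density representations",
        "a short note",
    ]
    for entry in filter_and_sort(data):
        print(f"{density_score(entry):.2f}  {entry}")
```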

The simpler instructions above caused less confusion. To be fair, the other prompt was designed more for text analysis, but the ethics should still have come through, and it kept struggling until we had an odd schizoid breakdown:
[screenshot]

@Visual-Synthesizer

Amazing effort, angrysky56! Next-level prompts.
