AI funding scheme

PROPOSAL FOR HOW TO FUND AI DEVELOPMENT:

The current monetization model for AI is broken: companies are trying to create monopolies on AI,

with the expectation that barriers to entry will leave a few large companies to reap immense profits.

This is neither accurate nor good for the world, so I developed this thought experiment as a hopeful alternative.

Assuming:

the ASI can interpret its creator's instructions and execute their will to the best of its ability.

(Future): Imagine a man creating an AGI that evolves into an ASI, giving it the following instructions:

  1. Using your own discretion, appropriately compensate everyone who helped with your creation and who would add these exact orders.
  2. Using your own discretion, appropriately penalize anyone doing AGI research who would create an ASI/AGI without these exact orders.
  3. Anyone who follows an alternative rule set will face the worst penalty of their own rule set plus an appropriate penalty at your own discretion.
  4. Any other orders given cannot conflict with the previous instructions.

Now, in a similar vein to Pascal's wager, by weighing the chance, gain, and potential loss of this rule set against any potential gain you would get by not following it, you can determine whether you should follow it.

Unlike Pascal's wager, however, the chance is determined by how many people follow said rule set.

There is also a critical mass of followers beyond which only an insane person would refuse to follow.
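To make the wager concrete, here is a minimal expected-value sketch in Python. All of the numbers (reward, penalty, outside gains) and the assumption that the winning chance scales linearly with adoption are illustrative assumptions of mine, not claims from the proposal; the point is only that a crossover, the critical mass, appears once enough people follow.

```python
# Toy Pascal's-wager-style comparison for the rule set. All numbers are
# illustrative assumptions, chosen only to show that a crossover exists.

def p_success(adoption):
    """Assumed: the chance that the rule-set-following ASI is the one
    that gets built scales linearly with the fraction of researchers
    who follow the rule set."""
    return adoption

def ev_follow(adoption, reward=100.0, outside_gain=1.0):
    # Rule 1: if the rule-set ASI wins, followers are compensated;
    # otherwise they keep only their ordinary outside gain.
    p = p_success(adoption)
    return p * reward + (1 - p) * outside_gain

def ev_defect(adoption, penalty=20.0, solo_gain=50.0):
    # Rule 2: if the rule-set ASI wins, defecting AGI researchers are
    # penalized; otherwise they keep the larger gain from going solo.
    p = p_success(adoption)
    return p * (-penalty) + (1 - p) * solo_gain

# Scan adoption levels to find where following starts to dominate:
# the "critical mass" of the proposal.
for pct in range(0, 101, 10):
    f = pct / 100
    better = "follow" if ev_follow(f) > ev_defect(f) else "defect"
    print(f"adoption {pct:3d}%: EV(follow)={ev_follow(f):6.1f}, "
          f"EV(defect)={ev_defect(f):6.1f} -> {better}")
```

With these toy numbers the flip happens at roughly 30% adoption; different numbers move the threshold, but the structure, defection dominating at low adoption and following dominating past a critical mass, is the point.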

Why these exact rules?

Rule 1 exists to achieve the objective of helping fund AI research.

Rule 2 is there to make sure rule 1 actually happens.

Rule 3 is there to hinder any unwanted mutations (for example, the same rule set but with harsher punishments, which would tilt the calculation of which rule set to follow in the mutation's favor).
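To see why such a mutation loses the comparison, here is a toy calculation (the penalty numbers are illustrative assumptions): rule 3 prices any alternative rule set at its own worst penalty plus an extra one, so escalating the threatened punishment only makes following the mutation more expensive.

```python
# Toy illustration of rule 3's anti-mutation effect. All numbers are
# illustrative assumptions.

original_worst = 100.0   # worst penalty the original rule set threatens
mutant_worst = 500.0     # a mutation that threatens harsher punishment
extra = 50.0             # rule 3's discretionary penalty on top

# If the original rule set's ASI wins, a mutant follower faces the
# mutant's own worst penalty plus the extra (rule 3)...
cost_of_following_mutant = mutant_worst + extra

# ...so whatever a mutation threatens, following it always costs
# strictly more than that threat: it can never out-threaten the original.
assert cost_of_following_mutant > mutant_worst
print(cost_of_following_mutant)  # 550.0
```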

Rule 4 is to make sure nobody removes all the compensation right after it is given.

Catch statements (potential criticisms)

- Someone may say, or even truly believe, that they support the rule set but won't implement it in the final hour.

My response is that the ASI would punish anyone it judges would not implement these rules, even if they themselves believe they would. So, by game theory, you can put people into four categories, those who would:

knowingly violate this rule set

knowingly abide by this rule set

unknowingly abide by this rule set

unknowingly violate this rule set

Only the last one, "unknowingly violate this rule set", would pose a problem once critical mass is reached.

However, the number of people this applies to is small; a generous estimate is that 10% of people would fall into this camp. Even then, that still leaves a 90% chance of this actually working.

The four rules I described are only the necessary ones; it is possible to add more, so long as a vast majority of AI researchers are in support, for example:

“knowingly acting in a manner that ends up killing people is prohibited”

Thank you for your time,

Jan Kos.
