Hi. I hope you can help me think through this topic: fine-tuning explicit probability, random vs. deterministic success.

My first ideas about this:

1) Deterministic events are defined by 0% or 100% probability

2) Pure randomness is defined by a homogeneous (uniform) distribution of probability over all possible results, e.g. 50% probability for each of 2 possibilities, as in a coin toss (100%/2 = 50% -> homogeneity of probabilities). For 3 possibilities, 33% (1/3) each is pure randomness, etc.

So for an event with 2 possible outcomes, like a coin toss, a 25% probability for one of the results is the midpoint between pure randomness (50%) and pure determinism (0%).

Of course the complement works the same way, i.e. 75% and 25% are both midpoints between a purely random and a completely deterministic 2-outcome event.

Closer to 50% means closer to an absolutely random event (again: in a 2-outcome event like a coin toss), and closer to 0% or 100% means closer to a deterministic event.

So 25% (or 75%) means 50% randomness in the event. And e.g. 10% (or 90%) means the 2-outcome event has 20% randomness, and so on.
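The 2-outcome mapping above can be sketched in a few lines of Python. The function name `randomness` and the linear rescaling are my own illustration of the idea, not standard terminology:

```python
def randomness(p):
    """Linear 'randomness' of a 2-outcome event with success probability p.

    1.0 at p = 0.5 (pure randomness), 0.0 at p = 0 or 1 (deterministic).
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability in [0, 1]")
    # Distance from 0.5, rescaled so p = 0.5 -> 1.0 and p = 0 or 1 -> 0.0.
    return 1.0 - abs(2.0 * p - 1.0)

print(randomness(0.25))  # 0.5 -> 50% randomness, matching the text
print(randomness(0.10))  # ~0.2 -> 20% randomness (up to floating point)
print(randomness(0.75))  # 0.5 -> the complement gives the same value
```

Note the symmetry: `p` and `1 - p` give the same value, which matches the point about 25% and 75% being equivalent.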

An event with more than 2 outcomes seems a bit more complicated. An event with n possible outcomes has a homogeneous probability of 100%/n, i.e. when the probabilities of all outcomes equal 100%/n, the event is completely random. Losing randomness means moving away from that homogeneity, i.e. some outcomes going above or below 100%/n.

In the n-outcome case, the loss of randomness is most obvious when one outcome has a large probability of occurring, i.e. when one outcome is more probable than all the others combined, when its probability is >50%.

Anyway, the meaning of "deterministic" is attached to one particular outcome, so to determine the real loss of randomness it is not enough to compare against the homogeneous probability 100%/n; we need to be able to make a reasonably determined prediction over all the possible outcomes of the event.

So, in the end, the meaning of a deterministic outcome is the same for an event with 2 possibilities, like a coin toss, and for one with n possibilities, i.e. the determinability (= predictability) of an event is measured by the probability of one outcome. If an outcome has a 75% probability then, as in the coin-toss case, the event has 50% randomness/predictability.
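One way to sketch the n-outcome version is to rescale the probability of the single most likely outcome, so that the uniform value 1/n maps to 0 and certainty maps to 1. This is my own illustrative reading of the paragraph above (using the maximum probability as the measure), not a standard definition:

```python
def predictability(probs):
    """'Predictability' of an event from its list of outcome probabilities.

    0.0 when all n outcomes are equally likely (1/n each, pure randomness),
    1.0 when one outcome has probability 1 (deterministic).
    Uses only the most probable outcome, per the idea in the text; this is
    an illustrative choice, not a standard measure.
    """
    n = len(probs)
    if n < 2:
        raise ValueError("need at least 2 outcomes")
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    p_max = max(probs)
    # Rescale so p_max = 1/n maps to 0 and p_max = 1 maps to 1.
    return (p_max - 1.0 / n) / (1.0 - 1.0 / n)

print(predictability([0.5, 0.5]))        # 0.0: coin toss, pure randomness
print(predictability([0.75, 0.25]))      # 0.5: matches the coin example
print(predictability([1/3, 1/3, 1/3]))   # 0.0: uniform 3-outcome event
```

For n = 2 this reduces exactly to the coin-toss formula: a 75% outcome gives 0.5, i.e. 50% predictability.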

Well, that's all for the moment. I hope my terrible English doesn't ruin the legibility of the text too much :D

P.S.: I said "explicit" randomness to distinguish it from the implicit randomness that comes from ignorance, as when you deal cards.

Zag24 wrote:[...]

If you can find some utility in thinking about it this way, then I'll be surprised and impressed; but I doubt you can.

I understand what you said. I just tried not to get too technical.

Yes, of course 1/n is 100%/n; I preferred to use percentages to avoid misunderstanding... maybe a bad decision on my part.

And you are right about "deterministic"; it was just a loose way of talking. I'm talking about predictability, the chances of predicting an event successfully.

You can predict an event with a 90% probability more easily than one with 50%.

When I said that 75% for an event represents just 50% predictability/unpredictability, it's because an event with a 50% probability is pure randomness... you can't predict anything at all about it.

As you move away from 50% probability, your chances of a successful prediction increase toward the absolutely predictable points of 0% or 100%.
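The claim that a 90% event is easier to predict than a 50% one can be checked with a quick simulation: always guess the more likely outcome and count how often you are right. The function name and setup here are my own sketch, not from the discussion:

```python
import random

def prediction_success_rate(p, trials=100_000, seed=0):
    """Empirical success rate of always predicting the likelier outcome
    of a 2-outcome event whose 'success' side has probability p."""
    rng = random.Random(seed)
    guess = p >= 0.5  # best strategy: always guess the likelier side
    hits = sum((rng.random() < p) == guess for _ in range(trials))
    return hits / trials

# Always guessing the likely side of a 90% event succeeds ~90% of the
# time; for a fair coin no strategy does better than ~50%.
print(prediction_success_rate(0.9))   # approx. 0.9
print(prediction_success_rate(0.5))   # approx. 0.5
```

At p = 0% or 100% the guess is always right, which matches the "absolutely predictable points" above.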

I know that things change when you repeat a random event or run multiple random events at once (like throwing 2-3 or more dice). I know the binomial distribution, permutations, and the basics of combinatorics.

But what I was thinking about is the meaning of the numbers. If the numbers have some meaning for me, I can fine-tune a game more easily.