

You need to translate them out of lesswrongese before you try interpreting them together:
probability: he made up a number to go with his feelings about a topic
subjective: the number is even more made-up and feelings-based than is normal for lesswrong
noticeable: the number is really tiny, but big enough for Eliezer to fearmonger about!
No, you don’t get to actually know what the number is; if you did, you could penalize Eliezer for predicting it wrongly, or ask why that number specifically. Just trust that the bayesianified language shows Eliezer thought really hard about it.
Those are some neat links! I don’t think Eliezer mentions Gödel machines or the metaheuristic literature anywhere in the sequences, and given his fixation on recursive self-improvement he really ought to have. It could be a simple failure to do a proper literature review, or it could be deliberate neglect, given that the examples you link show all of these approaches maxing out (which illustrates a major problem with the concept of a strong AGI bootstrapping itself to godhood: it is likely to hit diminishing returns).
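
To make the diminishing-returns point concrete, here is a toy model (my own illustrative numbers, nothing from the linked papers): if each round of self-improvement buys a multiplicative capability gain that shrinks geometrically, total capability converges to a finite ceiling instead of going FOOM.

```python
# Toy model of recursive self-improvement with decaying per-round gains.
# Assumption (mine, for illustration only): round k multiplies capability by
# 1 + r * d**k with 0 < d < 1, i.e. each improvement is harder to find than
# the last. The infinite product then converges, because the corresponding
# series r*d**0 + r*d**1 + ... is a convergent geometric series.

def capability_after(n_rounds: int, r: float = 0.5, d: float = 0.6) -> float:
    """Capability after n_rounds of self-improvement, starting from 1.0."""
    c = 1.0
    for k in range(n_rounds):
        c *= 1.0 + r * d**k  # each round's gain is smaller than the last
    return c

if __name__ == "__main__":
    for n in (1, 5, 10, 100):
        print(f"after {n:>3} rounds: {capability_after(n):.4f}")
```

With these made-up parameters the ceiling is roughly 3x the starting capability: infinitely many rounds of "self-improvement" and the system never even triples itself. You only get an intelligence explosion if the per-round gains stay bounded away from zero, which is exactly the assumption the maxed-out results in those links undercut.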