Law of exponents is gonna get you son.
You may think this is O(n^2), but it's actually O(1), bounded above by the number of users on Lemmy.
More like O(2^n), and what do you mean by "this"? The number will grow as O(2^n) until it hits the point where it stops. What exactly is bounded? The total number of posts?
#getyourdefinitionsstraight
Sorry, mixed up n^2 and 2^n. But what I meant was that there's eventually going to be a point where the limiting factor is the number of people willing to upvote it, which is asymptotically constant (after crossing the threshold of making it onto the front page).
Both the number of posts and the width of the posts are limited by a constant in this way, though the latter is a much larger constant. I suppose I was talking about the width of the posts, but it would have been more accurate to say it's bounded above by 2^(the number of users on Lemmy).
In short, I do not think these posts are going to reach a 2048-wide en passant, but I don’t think image size is going to be the reason why.
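To spell out the bound I mean (a rough sketch; w_0 and U are just my shorthand for the starting width and the number of Lemmy users, not anything official):

```latex
% Width of the n-th post, assuming each post doubles the previous one:
w_n = w_0 \cdot 2^n
% But n itself is capped by the pool of people willing to keep upvoting,
% so the width is bounded above by a (ridiculously large) constant:
w_n \le w_0 \cdot 2^U, \qquad U = \text{number of users on Lemmy}
```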
According to that logic, everything is O(1), because at some point you run out of memory or your computer crashes.
“How fast is your sorting algorithm?” – “It can’t sort all the atoms in the universe so O(1).”
That's not how O notation works. It is about asymptotics. It is about "what if it doesn't crash", "what if n goes to infinity".
So, again, what exactly is your question if your answer is O(1)?
O notation has a precise definition. A function f : N -> R+ is said to be O(g(n)) (for some g : N -> R+) if there exists a constant c such that f(n) <= cg(n) for all sufficiently large n. If f is bounded, then f is O(1).
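For reference, the same definition written out, with the bounded case spelled out (a sketch of the standard argument):

```latex
f \in O(g) \iff \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N}:\ \forall n \ge n_0,\ f(n) \le c \cdot g(n)
% Bounded case: if f(n) <= M for all n, take g(n) = 1 and c = M;
% then f(n) <= c * g(n) for every n, hence f is O(1).
```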
Yeah, you’re right, I’m not being rigorous here. I’m just co-opting big O notation somewhat inaccurately to express that this isn’t going to get as big as it seems because the number of upvotes isn’t going to increase all that much.
It's very pedantic, but he does have a point. It's similar to how you could view memory usage as O(1) regardless of the algorithm used, simply because a computer doesn't have infinite memory, so there's always an upper bound on it.
Only that’s not helpful at all when comparing algorithms, so we disregard that quirk and assume we’re working with infinite memory.
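For example (a toy sketch of my own; the function names are hypothetical): on a finite machine both of these have bounded memory, so "O(1)" in that degenerate sense, but only the usual asymptotic model tells them apart:

```python
def reversed_copy(xs):
    # Allocates a whole second list: Theta(n) extra memory
    # in the usual infinite-memory model.
    return xs[::-1]

def reverse_in_place(xs):
    # Swaps elements within the input: O(1) extra memory.
    i, j = 0, len(xs) - 1
    while i < j:
        xs[i], xs[j] = xs[j], xs[i]
        i, j = i + 1, j - 1
    return xs
```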
That's not how O notation works. I gave a longer answer in the other comment.
But you just completely ignored everything I said in that comment.
Mathematically, that is precisely how O notation works, only (as I've mentioned) we don't use it like that, because it gives no meaningful results. Plus, when looking at time, we can use O notation as normal, since a computer can, in principle, keep calculating forever.
Still, you're wrong to say that isn't how it works in general, which is easy to see if you look at the actual definition of O(g(n)).
Oh, and your computer crashing is a thing that could happen, sure, but that isn't taken into account in runtime analysis, because it only happens with some probability. If it happened after precisely three days every time, then you'd be correct, and all algorithms would indeed have an upper bound on time too. However, it doesn't, so we can't define that upper bound, as there will always be calculations that break it.
I hate anarchy chess so I welcome the effort to kill the server. >:-] muhahaha