There is a YouTube video on Servo’s homepage.
The first minutes of that video answer your question.
A reminder that the Servo project resumed active development at the start of 2023, and is making good progress every month.
If you’re looking for a serious in-progress effort to create a new open, safe, performant, independent, and fully-featured web engine, that’s the one you should be keeping an eye on.
It won’t be easy trying to catch up to continuously evolving and changing web standards, but that’s the only effort with a chance.
I for one am happy we’re getting an alternative to the Chrome/Firefox duality we’re stuck with.
Anyone serious about that would be sending their money towards Servo, which resumed active development at the start of 2023 and is making good progress every month.
I would say nothing but “Good luck” to other from-scratch efforts, but it’s hard not to see them as memes with small cultist followings living on hope and hype.
My post was a showcase of why there is no substitute for knowing your tools properly, and how, when you do, you never have to wait five minutes, let alone five years, for anything, because you never needed an IDE in the first place.
This applies universally. No minimum smartness or specialness scores required.
Not sure how what I write is this confusing to you.
`test`/`.test` is not necessarily all tests. `cargo expand` gives you options for correctly and coherently expanding Rust code, and doesn’t expand tests by default. `rg` was half a joke, since it’s Rust’s grep. You can just pipe `cargo expand [OPTIONS] [ITEM]` output to `vim '+set ft=rust' -` or `bat --filename t.rs` and search from there.

What part are you struggling with? The ripgrep (`rg`) part, or the `cargo expand` part?
You two bring shame to the programming community.
Just ripgrep cargo expand’s output, for f****’s sake.
I thought I saw this weeks ago.
May 21, 2024
yep
Anyway, neovim+rust-analyzer+ra-multiplex is all I need.
Not to minimize their work, which is actually amazing!
you wouldn’t know because…
Still based on GNOME.
you don’t have a single clue about what they are actually doing.
that’s not what I’m looking for when I’m looking at a backtrace. I don’t mind plain unwraps or assertions without messages.
You’re assuming the PoV of a developer in an at least partially controlled environment.
Don’t underestimate the power of (preferably specific/unique) text. Text a user (who is more likely to be experiencing a partially broken environment) can put in a search engine after copying it or memorizing it. The backtrace itself at this point is maybe gone because the user didn’t care, or couldn’t copy it anyway.
Don’t get angry with me, my friend. We are more in agreement than not re panics (not `.unwrap()`; another comment coming).
Maybe I’m wrong, but I understood the ‘literally’ in ‘literally never’ the way young people use it, which doesn’t really mean ‘literally’ and is just used to convey exaggeration.
But why can’t we fight to make Rust better and be that “good enough” tool for the next generation of plasma physicists, biotech researchers, and AI engineers?
Because to best realize and appreciate Rust’s added value, one has to be aware of, and hindered by, the problems Rust tries to fix.
Because Rust expects good software engineering practices to be put front and center, while in some fields, they are a footnote at best.
Because the idea of a uni-language (uni- anything really) is unattainable, not because the blasé egalitarian “best tool for the job” mantra is true, but because “best tool” from a productive PoV is primarily a question of who’s going to use it, not the job itself.
Even if a uni-language were the best at everything, that doesn’t mean every person who will theoretically use it will be fit, now or ever, to maximize its potential. If a person is able to do more with an assumed-worse tool than they do with a better one, that doesn’t necessarily invalidate the assumption, nor is it necessarily the fault of the assumed-better tool.
Rust’s success is not a technical feat, but rather a social one
fighting the urge to close tab
Projects like rust-analyzer, rustfmt, cargo, Miri, rustdoc, mdBook, etc. are all social byproducts of Rust’s success.
fighting much harder
LogLog’s post makes it clear we need to start pushing the language forward.
One man’s pushing the language forward is another man’s pushing the language backward.
A quick table of contents
Stopped here after all the marketing talk inserted in the middle.
May come back later.
Side Note: I don’t know what part of the webshit stack may have caused this, but selecting text (e.g. by triple-clicking on a paragraph) after the page is loaded for a while is broken for me on Firefox. A lot of errors getting printed in the JS console too. Doesn’t happen in a Blinktwice browser.
From my experience, when people say “don’t unwrap in production code” they really mean “don’t call panic! in production code.” And that’s a bad take.
What should be a non-absolutist mantra can be bad if applied absolutely. Yes.
Annotating unreachable branches with a panic is the right thing to do; mucking up your interfaces to propagate errors that can’t actually happen is the wrong thing to do.
What should be a non-absolutist mantra can be bad if applied absolutely.
(DISCLAIMER: I haven’t read the post yet.)
For example, if you know you’re popping from a non-empty vector, unwrap is totally the right too(l) for the job.
That would/should be `.expect()`. You register your assumption once, at the source level, and again at the panic level if the assumption ever gets broken. And it’s not necessarily a (local) logical error that may cause this: it could be a logical error somewhere else, or a broken running environment where sound logic is undermined by hardware or external system issues.
If you would be writing comments around your `.unwrap()`s anyway (which you should be), then `.expect()` is a strictly superior choice.

One could say `.unwrap()` was a mistake. It’s not even that short of a shortcut (typing-wise). And the maximally lazy could have always written `.expect("")` instead anyway.
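To make the difference concrete, here’s a minimal sketch; the function, data, and message are illustrative, not from any particular codebase:

```rust
fn last_sample(samples: &[f64]) -> f64 {
    // .expect() records the assumption at the source level, and repeats it
    // in the panic message if the assumption is ever broken at runtime.
    *samples
        .last()
        .expect("samples must be non-empty (guaranteed by the caller)")
}

fn main() {
    let samples = vec![1.0, 2.5, 4.0];
    // With .unwrap(), a violated assumption would panic with only the generic
    // "called `Option::unwrap()` on a `None` value" plus file:line.
    println!("last sample: {}", last_sample(&samples));
}
```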
You should have mentioned OP.
@fil
/// # Panics
///
/// - if `samples.len()` does not match the `sample_count` passed to [Self::new]
/// - if there are `NaN`s in the sample slice
Since this is library code, why not make the function return a `Result`?
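A sketch of that alternative; the error type, function name, and parameters below are hypothetical stand-ins, mirroring only the panic conditions documented above:

```rust
// Hypothetical error type covering the two documented panic conditions.
#[derive(Debug, PartialEq)]
enum SampleError {
    WrongCount { expected: usize, got: usize },
    NanAt(usize),
}

// Surface the failure modes as a Result instead of panicking.
fn validate_samples(samples: &[f64], sample_count: usize) -> Result<(), SampleError> {
    if samples.len() != sample_count {
        return Err(SampleError::WrongCount {
            expected: sample_count,
            got: samples.len(),
        });
    }
    if let Some(i) = samples.iter().position(|s| s.is_nan()) {
        return Err(SampleError::NanAt(i));
    }
    Ok(())
}

fn main() {
    assert_eq!(validate_samples(&[1.0, 2.0], 2), Ok(()));
    assert_eq!(
        validate_samples(&[1.0], 2),
        Err(SampleError::WrongCount { expected: 2, got: 1 })
    );
}
```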
start a process within a specific veth
That sentence doesn’t make any sense.
Processes run in network namespaces (netns), and that’s exactly what `ip netns exec` does.

A newly created netns via `ip netns add` has no network connectivity at all. Even (private) localhost is down, and you have to run `ip link set lo up` to bring it up.

You use `veth` pairs to connect a virtual device in a network namespace with a virtual device in the default namespace (or another namespace with internet connectivity).
You route the VPN server address via the netns veth device and nothing else. Then you run wireguard/OpenVPN inside netns.
Avoid using systemd since it runs in the default netns by default, even if called from a process running in another netns.
The way I do it is: `ns_con AA` to set up and connect netns `AA`, then `ns_run AA <cmd>` (a wrapper around `ip netns exec`) to run commands inside it:

export DISPLAY=:0 # for X11
export XDG_RUNTIME_DIR=/run/user/1000 # to connect to already running pipewire...
# double check this is running in AA ns
tmux -f <alternative_config_file_if_needed> -L NS_AA
I have this in my tmux config:
set-option -g status-left "[#{b:socket_path}:#I] "
So I always know which socket a tmux session is running on. You can include network info there if you’re still not confident in your setup.
Now, I can detach that tmux session. Reattaching with `tmux -L NS_AA attach` from anywhere will give me the session still running in `AA`.
You don’t even need full-fledged containers for that btw.
Learn how to script with `ip netns` and `veth`.
(putting my Rust historian hat on)
Even the name stdx[1][2] is not original.
It was one of multiple attempts to officially or semi-officially present a curated list of crates. Thankfully, all these attempts failed, as the larger community pushed back against them, and, more relevantly, as the swarm refused to circle around any of them.
This reminds me of a little-known and long-forgotten demo tool named `cargo-esr`[1][2]. But it’s not the tool, but the events it was supposedly created in response to, that are worth a historical mention, namely these blog posts[1][2], and the commotion that followed them[1][2][3][4].

For those who were not around back then, there was an obscure crate named `mio`, created by an obscure developer named Carl Lerche, that was like the libevent/libuv equivalent for Rust. `mio` was so obscure I actually knew it existed before Rust even hit v1.0. Carl continued to do more obscure things like `tokio`, whatever that is.

So, the argument was that there was absolutely no way whatsoever that one could figure out they needed to depend on `mio` for a good event-loop interface. It was totally an insurmountable task!

That was the circus, and “no clown left behind” was the mindset, that gave birth to all these std-extending attempts.
So, let’s fast-forward a bit. NTPsec didn’t actually get (re)written in Go, and ended up being a trimming, hardening, and improving job on the original C implementation. The security improvements were a huge success! Just the odd vulnerability here and there. You know, stuff like NULL dereferences, buffer over-reads, out-of-bounds writes, the kind of semantic errors Rust famously doesn’t protect from 🙂
To be fair, I’m not aware of any big NTP implementations written in Rust popping up around that time either. But we do finally have the now-funded `ntpd-rs` effort progressing nicely.

And on the crates objective-metrics front, kornel of lib.rs fame started, and continues, to collect A LOT of them for his service. Although, he and lib.rs are self-admittedly NOT opinion-free.
DISCLAIMER: I didn’t even visit OP’s link.