Wouldn’t THIS be a better link?
I get why ppl would use something other than github, but why do they have to torture me with gitlab?
What’s wrong with GitLab?
It has light mode by default and a UI that I find really unintuitive, but what really bothers me is that ppl go from one for-profit git host to another for-profit git host when things like Codeberg exist. With GitHub you could at least argue that you can turn your hobby project into a job since it has a huge userbase and stuff like GitHub Sponsors, but what does GitLab offer you?
TL;DR: It’s not Codeberg
GitLab is open source and you can self-host it.
How is that relevant if I’m talking about someone hosting their code on gitlab.com?
You asked what GitLab offered and I answered that question. I ran GitLab at work for years. Amazing project. Much value there.
GitLab is still a commercial entity, and is looking for buyers as I understand it. Plex was once open source, but guess why everyone recommends Jellyfin now.
GitLab is a security nightmare. They have zero conception of how to write secure code and they don’t care to learn.
I was looking for a link to the previous CVEs I was aware of and there is yet another one that is new to me: https://thehackernews.com/2024/09/urgent-gitlab-patches-critical-flaw.html
This is not a serious service to be hosting source code on.
Not if you want to promote a website.
The project is cool, but I am even more annoyed with articles that tell me what I think or want than I am with articles that use words like SLAMMED to make a mountain out of a molehill.
There are so many better ways to write that headline with the same sentiment. For example: “An open source mirrorless camera is going to be a big hit.”
My immediate, visceral reaction to that headline was, “no I wouldn’t” before I even opened it. I opened it anyway because it sounded cool, but don’t tell me what I would want to use.
Watch @FlyingSquid@lemmy.world destroy hackaday!
With camera sensors being so good, the major differences will be autofocus capabilities.
Imagine an open source autofocus algorithm that people can train locally on their own photos so that it adapts to their shooting style.
Does this sensor have AF pixels? Otherwise it’ll be hard to get good AF unless you put a traditional AF module in. Contrast-based AF is always going to be terrible.
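For context on why contrast-based AF gets that reputation: it’s a blind focus sweep that just maximizes a sharpness score, with no phase information telling the lens which way or how far to move. A minimal sketch of that loop, with hypothetical `set_focus`/`capture_frame` helpers standing in for real camera control:

```python
import cv2


def sharpness(frame):
    # Variance of the Laplacian: higher means more in-focus detail.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def contrast_detect_af(set_focus, capture_frame, positions=range(0, 1000, 25)):
    # Sweep the focus motor, score each frame, park at the sharpest position.
    best_pos, best_score = None, -1.0
    for pos in positions:
        set_focus(pos)                        # hypothetical lens motor control
        score = sharpness(capture_frame())    # hypothetical sensor readout
        if score > best_score:
            best_pos, best_score = pos, score
    set_focus(best_pos)
    return best_pos
```

Phase-detect pixels skip the sweep entirely, which is why they matter so much for moving subjects.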
This will be multi-kilobucks but best wishes. There was a series of GPL cameras some years back (I’m spacing on the name) but they used smaller sensors and were more video oriented. Anyone remember?
Edit: I remember now. It was elphel.com and it appears to still be around.
Yeah, it seems the sensor costs as much as a decent used camera.
And it’s not a great sensor to put it mildly.
I haven’t had a CCD camera since my Pentax K10D. That’s now old enough to drink.
Lately I’ve been browsing the MPB used camera market for an upgrade to my decade-old Canon clunker, because Canon wants to charge a monthly fee to use it with my PC.
It’s nice to see progress made on open source cameras, but I don’t see it being competitive with used cameras price-wise.
Say whaaaa?
Canon wants to charge for their webcam utility?
Yes, basic 720p is free, but if you have an old 1080p camera then you gotta pay up or deal with lower res or unclean HDMI out.
There are no open source drivers for Canons.
Do you mean specifically webcam drivers? Because Magic Lantern still works as far as I know?
I’ll test it out when I get back and let you know if that’s true.
EDIT: Just remembered I gave the old camera to a relative because I was planning to upgrade lmao. Where were you months ago?!
This is super interesting, and a project I’m gonna keep an eye on. Not least of all because I’ve got a good selection of E-mount lenses.
One thing that’s gonna be a struggle is that all the specific lens corrections in photo software obviously will not be present for this. I wonder if the body behaves optically similarly enough to an existing Sony camera to be able to reuse those profiles.
I’d think they’d handle this with calibration. It doesn’t need to be as sexy as the commercial stuff; it just needs a reasonably easy process to fix it.
Something like: when you get a new lens, you aim it at a laser diffraction pattern on a clean wall.
Now you don’t have to worry about minor differences in body or lenses.
You aim it at what? Who has that?
It would be super cheap to make a laser diffraction grid. You could map the lens deformation because you know the lines on the grid are straight. This would be solely for mapping the properties of the lens/mount and how to handle deformation profiles. Once you dial in the lens you probably wouldn’t need to run it again, assuming it can ID the lens when you mount it.
I would say you could use red, green, and blue lasers and look at convergence, but I’m not sure that would actually be off in any decent hardware.
Edit: you should note, iPhone already does this for Face ID. It’s not really that much of a stretch to make it go the other way.
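Rough sketch of how that mapping could work in practice, using OpenCV and a dot-grid target; the grid size, folder, and file names here are made up for illustration, not anything this project actually ships:

```python
import glob

import cv2
import numpy as np

# Known geometry of the projected dot grid (say, 7x9 dots from the laser);
# spacing units are arbitrary since we only care about distortion, not scale.
GRID = (7, 9)
objp = np.zeros((GRID[0] * GRID[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:GRID[0], 0:GRID[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_shots/*.png"):        # a handful of test shots
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(img, GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

# Solve for the camera matrix and distortion coefficients of this lens+body combo.
_, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)
np.savez("lens_profile.npz", camera_matrix=mtx, dist_coeffs=dist)

# Later, undo the mapped deformation on any photo taken with that lens.
corrected = cv2.undistort(cv2.imread("photo.png"), mtx, dist)
```

Dial the lens in once, store the coefficients keyed to the lens ID, and you’re done.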
“It would be” so you haven’t done this but speak confidently about it being cheap and accessible?
You can purchase laser pointers with a grid diffraction grating right now, with zero DIY effort.
You can purchase house-decoration-style diffraction gratings, which are a larger format but are intensely bright. They are, however, less portable.
You can follow The Thought Emporium’s instructions on how to create diffraction gratings, which include the software and the process.
And yes, I already own a 300 milliwatt laser with a diffraction pattern that would work for this.
And these things produce actually perpendicular lines?
They have an incredibly high degree of accuracy. It’s the same thing iPhones use for Face ID. And if you need it to be easier, it doesn’t have to be straight lines as long as it’s dots in known locations.
If the sensor is the same size, the lens corrections should be identical. Now if it communicates focal length info into the metadata (on a zoom lens), or any data for that matter, that’s a different issue.
I believe focal length & aperture EXIF metadata do factor into modern lens correction profiles.
It’s worth highlighting that the profiles are typically based on the combination of a lens and a body; one lens used on two different camera bodies would result in two different profiles being used.
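A toy illustration of that lookup (all names and coefficient values below are made up, purely to show the shape of the thing: the profile is keyed on body plus lens, and the EXIF focal length picks or interpolates the coefficients; aperture would just be one more axis):

```python
from bisect import bisect_left

# Hypothetical correction database: one profile per (body, lens) pair, with
# radial distortion coefficients (k1, k2, k3) sampled at a few focal lengths.
PROFILES = {
    ("ExampleBody", "Example 24-70mm F2.8"): {
        24.0: (-0.031, 0.012, -0.002),
        50.0: (-0.008, 0.004, -0.001),
        70.0: (0.005, 0.001, 0.000),
    },
}


def lookup_profile(body, lens, focal_length_mm):
    # A different body (or lens) keys a different profile entirely.
    samples = PROFILES[(body, lens)]
    focals = sorted(samples)
    i = bisect_left(focals, focal_length_mm)
    if i == 0:
        return samples[focals[0]]
    if i == len(focals):
        return samples[focals[-1]]
    # Linearly interpolate between the two nearest sampled focal lengths,
    # which is where the EXIF focal length comes in.
    lo, hi = focals[i - 1], focals[i]
    t = (focal_length_mm - lo) / (hi - lo)
    return tuple(a + t * (b - a) for a, b in zip(samples[lo], samples[hi]))


print(lookup_profile("ExampleBody", "Example 24-70mm F2.8", 35.0))
```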
Dude that’s sweet as hell
I actually like mirrors.
Please, consider the vampires.
You can’t see them through the view finder, but they are in the picture.