If you’re already on a Linux-based operating system and you gotta run a real instance of Windows for some reason, your safest bet from both a security and a privacy standpoint is to run it in a virtual machine (I like VirtualBox personally, but VMware or whatever else will do the job fine too) and firewall the hell out of it. In a virtual machine you can lock it down as much or as little as you need for the task at hand, and there ain’t a damned thing Windows itself can do about it. As an added bonus, it saves you from the constant reboots of dual-booting. It’s confined to a “safe space” (until you start enabling network stuff and opening ports to it). You’re in control.
edit: or QEMU/KVM (with virt-manager)
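If you take the QEMU/KVM route, here’s a minimal sketch of that lockdown using the libvirt C API: it defines an isolated network (no `<forward>` element, so the guest can reach the host but not the outside world). The network name, bridge name and addressing below are placeholders I made up; adjust to taste, or just click the same thing together in virt-manager.

```c
/* Define and start an isolated libvirt network for the Windows VM.
 * Build: gcc isolated-net.c -lvirt -o isolated-net
 * Run as a user allowed to talk to qemu:///system. */
#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "could not connect to qemu:///system\n");
        return 1;
    }

    /* No <forward> element means: no NAT, no routing. Guests on this
     * network can only talk to each other and to the host bridge. */
    const char *xml =
        "<network>"
        "  <name>win-isolated</name>"
        "  <bridge name='virbr-win' stp='on' delay='0'/>"
        "  <ip address='192.168.130.1' netmask='255.255.255.0'/>"
        "</network>";

    virNetworkPtr net = virNetworkDefineXML(conn, xml);
    if (!net) {
        fprintf(stderr, "defining the network failed\n");
        virConnectClose(conn);
        return 1;
    }

    if (virNetworkCreate(net) < 0)
        fprintf(stderr, "starting the network failed\n");
    else
        printf("isolated network 'win-isolated' is up\n");

    virNetworkFree(net);
    virConnectClose(conn);
    return 0;
}
```

Point the Windows VM’s NIC at win-isolated and nothing leaves the host unless you explicitly route it.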
Really, you’d have to fire up Wireshark and see what telemetry Windows is blabbing behind your back. Analysing those captures can be a tedious business, especially as you’d need a large dataset.
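If you’d rather script that tedium than eyeball it in Wireshark, a small libpcap sketch can tally how much of a capture (taken on the VM’s interface) matches a filter. The filter expression here is only a placeholder; you’d narrow it to the addresses you actually suspect:

```c
/* Count packets in a capture that match a BPF filter.
 * Usage: ./tally capture.pcap
 * Build: gcc tally.c -lpcap -o tally */
#include <pcap/pcap.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    if (argc < 2) {
        fprintf(stderr, "usage: %s capture.pcap\n", argv[0]);
        return 1;
    }

    pcap_t *p = pcap_open_offline(argv[1], errbuf);
    if (!p) {
        fprintf(stderr, "pcap: %s\n", errbuf);
        return 1;
    }

    /* Placeholder filter: all outbound HTTPS. Replace with the hosts or
     * ranges you suspect, pulled from your own DNS logs. */
    struct bpf_program prog;
    if (pcap_compile(p, &prog, "tcp dst port 443", 1, PCAP_NETMASK_UNKNOWN) < 0 ||
        pcap_setfilter(p, &prog) < 0) {
        fprintf(stderr, "filter: %s\n", pcap_geterr(p));
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    long pkts = 0, bytes = 0;
    while (pcap_next_ex(p, &hdr, &data) == 1) {
        pkts++;
        bytes += hdr->len;
    }
    printf("%ld packets, %ld bytes matched\n", pkts, bytes);

    pcap_close(p);
    return 0;
}
```

Run it over captures from a few days of idle uptime and the “large dataset” part mostly takes care of itself.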
Thing is, with just about any tech-related question posted, some geek has likely already done the heavy lifting for you. Here is a nice start:
https://www.zdnet.com/article/windows-10-and-telemetry-time-for-a-simple-network-analysis/
Here is another one:
https://www.comparitech.com/blog/information-security/windows-10-data/
That’s the data it’s required to collect, though; it doesn’t say whether or not it’s actually sent back to Microsoft. Best assume yes.
Course, all that proprietary software will have a voluminous licence agreement that nobody reads. They’ll collect as much data as they can to “maximise user experience” or whatever rubbish.
Pro is a little better because of features like BitLocker. A lot better would be the Education/Enterprise variants; you’d need volume licensing to run Enterprise, I think. There are also registry hacks that give you some protection against telemetry (I haven’t personally done this; there’s a sketch below).
Privacy-wise, though, any Windows is going to fare worse than Linux, is what I’d say. Wait for others in the sub for more insights.
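For what it’s worth, the best known of those registry hacks is the documented AllowTelemetry policy value. A sketch of setting it via the Win32 registry API (run as admin inside the VM; note that the strictest level, 0 = Security, is only honoured on Enterprise/Education, while Pro silently falls back to Basic):

```c
/* Set the DataCollection policy value AllowTelemetry to 0.
 * Build with mingw-w64: x86_64-w64-mingw32-gcc telemetry.c -ladvapi32 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD security_level = 0;  /* 0 = Security (Enterprise/Education only) */
    LONG rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Policies\\Microsoft\\Windows\\DataCollection",
            0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "opening the policy key failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExA(key, "AllowTelemetry", 0, REG_DWORD,
            (const BYTE *)&security_level, sizeof(security_level));
    RegCloseKey(key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "setting AllowTelemetry failed: %ld\n", rc);
        return 1;
    }
    puts("AllowTelemetry policy set to 0");
    return 0;
}
```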
In my opinion, there are two main aspects of coreboot that differentiate it from other firmware ecosystems:
The first is a strong push towards having a single code base for lots of boards (and, these days, architectures). Historically, most firmware is built in a model I like to call “copy & adapt”: the producer of a device picks the closest reference code (probably a board support package), adapts it to work with their device, builds the binary and puts it on the device, then moves on to the next device.
Maintenance is hard in such a setup: if you find a bug in common code, you have to backport the fix to all these copies of the source code, hope it doesn’t break anything else, and build all these different trees. Building a 5-year-old coreboot tree on a modern OS is quite the exercise, but many other firmware projects are near impossible to build under such circumstances.
With coreboot, we encourage developers to push their changes to the common tree. We maintain it there, but we also expect the device owner (either the original developer or some interested user) to help with that: at least with testing, but ideally with code contributions that keep the board up to the current standards of the surrounding code. A somewhat maintained board is typically brought up to the latest standards in less than a day when that’s required, which means everybody has an easy time doing a new build when necessary.
The second aspect is our separation of responsibilities: where BIOS mandates the OS-facing APIs and not much else (with lots of deviation in how that standard is implemented), UEFI (and other projects like U-Boot) tends to go to the other extreme: with UEFI you buy into everything from the build system to boot drivers, OS APIs and the user interface. If you need something that provides only 10% of UEFI, you’ll have a hard time.
With coreboot, we split responsibilities between two parts: coreboot does the hardware initialization (and comes with its own build system and drivers for the coreboot part, but barely any OS APIs and no user interface). The payload is responsible for providing interfaces to the OS and the user; we can use TianoCore to provide a UEFI experience on top of coreboot’s initialization, or SeaBIOS, GRUB2, U-Boot, Linux, or any program you build for the purpose of running as a payload.
The interface between coreboot and the payload is pretty minimal: the payload’s entry point is well-defined, and there’s a data table in memory that describes certain system properties. In particular, the interface defines no code for the payload to call back into (that includes drivers): we found that such callback interfaces complicate things and paint the firmware architecture into a corner.
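To give a feel for how small that contract is: the table is a header followed by tagged, self-describing records. Roughly like this (paraphrased from coreboot’s coreboot_tables.h; check the real header for the authoritative layout):

```c
#include <stdint.h>

/* The payload finds the table by scanning low memory (on x86, roughly
 * 0x00000-0x01000 and 0xf0000-0x100000) for the "LBIO" signature on a
 * 16-byte boundary. */
struct lb_header {
    uint8_t  signature[4];    /* "LBIO" */
    uint32_t header_bytes;
    uint32_t header_checksum;
    uint32_t table_bytes;     /* size of all records that follow */
    uint32_t table_checksum;
    uint32_t table_entries;   /* number of records */
};

/* Each record describes one system property (memory map, serial
 * console, framebuffer, ...); unknown tags are simply skipped. */
struct lb_record {
    uint32_t tag;             /* e.g. LB_TAG_MEMORY */
    uint32_t size;            /* total record size, for skipping */
};
```

Because every record carries its own size, old payloads keep working when coreboot adds new tags: the interface is data, not callable code.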
To help payload developers, coreboot also provides libpayload, a set of minimal libraries implementing libc, ncurses and various other things we found useful, plus standard drivers. It’s up to each coreboot user/vendor whether they use that or go with whatever else they want.
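The “hello world” payload built against libpayload really is more or less this small; build it with libpayload’s toolchain and point coreboot’s build at the resulting ELF as its payload:

```c
/* Minimal payload: libpayload's startup code initializes the console
 * (serial and/or display, depending on its config) before main() runs. */
#include <libpayload.h>

int main(void)
{
    printf("Hello from the payload!\n");
    halt();  /* nothing to boot, so just stop */
}
```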
credit: [deleted] user on Reddit.