  • And the article content posted is just an excerpt. The rest of the article focuses on how AI can improve the efficiency of workers, not replace them.

    Ideally, you’ve got a knowledgeable individual using AI to process data more efficiently, one smart enough to ignore or toss out the crap and to review the output carefully with a critical eye. I suspect the reality is that most individuals using AI will just pass its output along uncritically.

    I’m less worried about employees scared of AI and more worried about employees and employers embracing AI without any skepticism.


  • As a guy responsible for a 1,000-employee O365 tenant, I’ve been watching this with concern.

    I don’t think I’m a target of state actors. I also don’t have any E5 licenses.

    I’m disturbed by the opaqueness of MS’ response. From what they have explained, it sounds like the bad actors could self-sign a valid token to access cloud resources. That’s obviously a huge concern. It also sounds like the bad actors only accessed Exchange Online resources. My understanding is that with a valid token they could have done more. I feel like the fact that they didn’t means something’s not yet public.
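
    To illustrate why self-signing a valid token is so dangerous, here’s a minimal sketch using PyJWT with a shared symmetric key. The real incident reportedly involved a stolen signing key and Microsoft’s own token format; the key, claims, and audience below are all hypothetical.

      # Minimal sketch, not Microsoft's actual token format. The point:
      # whoever holds the signing key can mint tokens the verifier
      # cannot distinguish from legitimate ones.
      import jwt  # PyJWT

      SIGNING_KEY = "leaked-secret"  # hypothetical compromised key

      # The attacker forges a token claiming to be any user they like...
      forged = jwt.encode(
          {"sub": "admin@victim-tenant.example", "aud": "exchange-online"},
          SIGNING_KEY,
          algorithm="HS256",
      )

      # ...and the service, trusting the same key, accepts it as valid.
      claims = jwt.decode(
          forged, SIGNING_KEY, algorithms=["HS256"], audience="exchange-online"
      )
      print(claims["sub"])  # admin@victim-tenant.example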

    I’m very disturbed by the fact that it sounds like I’d have no way to know this sort of breach was even occurring.

    Compared to decades ago, I have a generally positive view of MS and security. It bothers me that this breach had been underway for a month before the US government notified MS of it. It also bothers me that MS hasn’t been terribly forthcoming about what happened. There’s probably no need to mention that it also bothers me how deep into the O365 environment I am, with no realistic way to pull out.


  • Nice job. Packet loss will definitely cause these issues. Now, you just need to find the source of the packet loss.

    In your situation, I’d first try to figure out whether it’s an ISP/Internet problem before looking inside either network. I wouldn’t expect it to be internal at these speeds. That said, did you get CPU/RAM readings on the network equipment during these tests? Maxing out either can cause packet loss.

    I’d start with two pairs of packet captures taken while the issue is happening: endpoint to endpoint and edge router to edge router. Figure out whether the packet loss is happening in only one direction. That is, are all the UK packets reaching DE, but not all the DE packets making it back? You should be able to narrow in on a TCP conversation with dropped packets. Dropped packets aren’t ones a system never sent; they’re ones a system never received. Find some of those and start figuring out where the drop happened; something like the sketch below can compare the two edge captures.
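
    As a rough sketch of that comparison, assuming Scapy and one capture file from each edge (the file names and addresses are hypothetical): collect the TCP sequence numbers seen at each edge for one direction of a flow, and anything present at the sending edge but absent at the receiving edge was dropped in between.

      # Rough sketch: compare TCP sequence numbers seen at both edges to
      # find segments that left the UK capture point but never appeared
      # in the DE capture. Requires scapy (pip install scapy).
      from scapy.all import IP, TCP, rdpcap

      def seqs_seen(pcap_path, src, dst):
          """TCP sequence numbers observed for one direction of a flow."""
          seen = set()
          for pkt in rdpcap(pcap_path):
              if IP in pkt and TCP in pkt and pkt[IP].src == src and pkt[IP].dst == dst:
                  seen.add(pkt[TCP].seq)
          return seen

      # Hypothetical addresses for the UK -> DE direction.
      sent = seqs_seen("uk-edge.pcap", "10.1.0.5", "10.2.0.5")
      received = seqs_seen("de-edge.pcap", "10.1.0.5", "10.2.0.5")

      lost = sent - received
      print(f"{len(lost)} of {len(sent)} segments left the UK edge "
            f"but never reached the DE edge")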



  • If the bandwidth numbers you’ve described are accurate, I’d start by looking at CPU and RAM usage on the network devices. The FortiGates are going to be doing extra work to handle the VPN. I wouldn’t expect an IPsec VPN on a FortiGate to top out at 10 Mbps, but if it’s doing a lot of other work, it’s possible. ACLs on the Cisco devices? You run the risk of CPU/RAM exhaustion there too. Hopefully, you have remote monitoring on all network devices and can just look at the history from when these transfers were happening; failing that, a one-off poll like the sketch below works.
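
    Here’s a sketch of such a poll using the classic pysnmp synchronous hlapi over SNMPv2c. The community string and address are hypothetical, and the OID is what I recall fgSysCpuUsage being in the FORTINET-FORTIGATE-MIB, so verify it against your MIB before trusting the number.

      # One-off SNMP poll of a FortiGate's CPU usage (pysnmp 4.x, SNMPv2c).
      from pysnmp.hlapi import (
          CommunityData, ContextData, ObjectIdentity, ObjectType,
          SnmpEngine, UdpTransportTarget, getCmd,
      )

      errorIndication, errorStatus, errorIndex, varBinds = next(
          getCmd(
              SnmpEngine(),
              CommunityData("public"),                 # hypothetical community
              UdpTransportTarget(("192.0.2.1", 161)),  # hypothetical FortiGate
              ContextData(),
              # fgSysCpuUsage as I recall it; verify against the MIB.
              ObjectType(ObjectIdentity("1.3.6.1.4.1.12356.101.4.1.3.0")),
          )
      )

      if errorIndication or errorStatus:
          print(errorIndication or errorStatus.prettyPrint())
      else:
          for varBind in varBinds:
              # Prints "<oid> = <cpu percentage>".
              print(" = ".join(x.prettyPrint() for x in varBind))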

    If nothing obvious turns up there, I’d try packet captures while this is happening, perhaps starting on the system doing the SSH transfer and on one or two others experiencing issues. What are you seeing? Evidence of dropped packets? High latency? If dropped packets, start capturing the same traffic on the network devices it flows through. A crude sampler like the one below can also confirm what the captures show.
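
    As a crude cross-check while the transfer runs, this standard-library sketch samples TCP connect times to the far end as a latency/loss proxy (the address and port are hypothetical):

      # Crude latency/loss sampler: time repeated TCP connects to the
      # far end while the slow transfer is running.
      import socket
      import statistics
      import time

      TARGET = ("192.0.2.10", 22)  # hypothetical far end, any open TCP port
      samples, failures = [], 0

      for _ in range(50):
          start = time.monotonic()
          try:
              with socket.create_connection(TARGET, timeout=2):
                  samples.append((time.monotonic() - start) * 1000)
          except OSError:
              failures += 1
          time.sleep(0.5)

      print(f"{failures} failed connections out of 50")
      if samples:
          print(f"median {statistics.median(samples):.1f} ms, "
                f"max {max(samples):.1f} ms")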


  • Does the GPL cover having to give redistribution rights to the exact same code used to replicate a certain build of a product?

    It does, very explicitly and intentionally. What it doesn’t say is that you have to make that source code available publicly, just that you have to make it available to those you give or sell the binary to.

    What Red Hat is doing is saying: you have full rights to the code, and you have the right to redistribute it. However, if you exercise that right, they’ll pull your license to their binaries and you’ll lose access to future code fixes.

    That’s probably legal under the GPL, though people smarter than me are arguing it isn’t. However, if those writing GPLv2 had thought of this type of attack at the time, I suspect they would have drafted the license so it wasn’t.



  • I believe you are correct. Any paying Red Hat customer consuming GPL code has the right to redistribute that code. What Red Hat seems to be suggesting is that if you exercise that right, they’ll drop you as a customer, and thus you’ll no longer have access to bug fixes going forward.

    I suspect it’s legal under the GPL. I’m certain it violates the spirit of the GPL.


  • I am not a lawyer, but I have been a follower of FLOSS projects for a long time.

    Me too. I know what I’m suggesting is functionally impossible. I’m wondering if it could be done in compliance with the GPL.

    All of those contributors have contributed under language that says GPLv2 or higher; specifically, it says you can modify or redistribute under GPLv2 or any later version. So nothing stops the Linux Foundation from asking new contributors to contribute under GPLv4 and then releasing the combined work of the new kernel under GPLv4.

    The old code would still be available under GPLv2, but I suspect subsequent releases could be released under a later version and still comply with the terms of the original contributions.

    Again, I know it won’t happen, just like I believe Red Hat’s behavior is within the rules of the GPL. I’d love to hear arguments as to how Red Hat is violating the GPL or reasons why the kernel couldn’t be released under GPLv3 or higher.



  • Upvotes and downvotes.

    Right now, I can browse by New on my subscribed communities and see every post since the last time I did that.

    I can view or re-view posts and read every response. If the responses are legion, I can play with hot/top and get the meat of the discussion.

    Did you notice that last sentence? On the few posts where there are too many responses to view them all, I try to get at those that are relevant.

    If the Lemmy community grows large enough, I’ll need to do the same for posts. I will no longer be able to regularly view by new and have time to see everything.

    So, I’ll need to rely on some sorting method to make certain I see relevant stuff.

    Someone with millions of bots that never post still has millions of upvotes and downvotes with which to influence the score used by the sorting algorithm I’ll rely on to decide what to read.
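
    To make that concrete, here’s a simplified sketch of a Reddit/Lemmy-style “hot” rank. It’s not Lemmy’s exact formula, but it has the same shape: the vote score is log-scaled and decays with age, so a botnet that only votes can push any post up or down the listing.

      # Simplified "hot" ranking sketch: log-scaled vote score with
      # time decay. Not Lemmy's exact formula, but the same shape.
      import math

      def hot_rank(upvotes: int, downvotes: int, hours_old: float) -> float:
          score = upvotes - downvotes
          sign = (score > 0) - (score < 0)
          return sign * math.log10(max(abs(score), 1)) / (hours_old + 2) ** 1.8

      organic = hot_rank(upvotes=40, downvotes=5, hours_old=3)
      botted = hot_rank(upvotes=40, downvotes=5 + 10_000, hours_old=3)
      print(f"organic: {organic:.4f}  bot-downvoted: {botted:.4f}")
      # The bot-downvoted post sinks below everything else; bot upvotes
      # work the same way in the other direction.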