• 0 Posts
  • 44 Comments
Joined 2 years ago
Cake day: July 9th, 2023

  • Could you let me know what sort of models you’re using? Everything I’ve tried has basically been so bad that it was quicker and more reliable to do the job myself. Most of the models can barely write boilerplate code accurately and securely, let alone anything even moderately complex.

    I’ve tried to get them to analyse code too, and that’s hit and miss at best, even with small programs. I’d have no faith at all that they could handle anything larger; the answers they give would be confident and wrong, which is easy to spot with something small, but much harder to catch with a large, multi-process system spread over a network. It’s hard enough for humans, who have actual context, understanding and domain knowledge, to do it well, and I personally haven’t seen any evidence that an LLM (which is what I’m assuming you’re referring to) could do anywhere near as well. I don’t doubt that they flag some issues, but without a comprehensive human review of the system architecture, implementation and code, you can’t be sure what they’ve missed, and if you’re going to do that anyway, you’ve done the job yourself!

    Having said that, I’ve no doubt that things will improve. Programming languages have well-defined syntaxes, so they should be some of the easiest types of text for an LLM to parse and build a context from. If that can be combined with enough domain knowledge, a description of the deployment environment and a model that’s actually trained and tuned for code analysis and security auditing, it might be possible to get similar results to humans.


  • I’m unlikely to do a full code audit, unless something about it doesn’t pass the ‘sniff test’. I will often go over the main code flows, the issue tracker, mailing lists and comments, positive or negative, from users on other forums.

    I mean, if you’re not doing that, what are you doing, just installing it and using it??!? Where’s the fun in that? (I mean this at least semi-seriously; you learn a lot about the software you’re running if you put in some effort to learn about it.)


  • ‘AI’ as we currently know it is terrible at this sort of task. It’s not capable of understanding the flow of the code in any meaningful way, and tends to raise entirely spurious issues (see, for example, the problems the curl author has had with being overwhelmed by them). It also won’t spot actually malicious code that’s been included with any sort of care, nor would it find intentional behaviour that would be harmful or counterproductive in the particular scenario you want to use the program in.





  • Before you can decide on how to do this, you’re going to have to make a few choices:

    Authentication and Access

    There are two main ways to expose a git repo, HTTPS or SSH, and they both have pros and cons here:

    • HTTPS A standard sort of protocol to proxy, but you’ll need to make sure you set up authentication on the proxy properly so that only those who should have access can get it. The git client will need to store a username and password to talk to the server, or you’ll have to enter them on every request. gitweb is a CGI that provides a basic, but useful, web interface.

    • SSH Simpler to set up, and authentication is a solved problem. Proxying it isn’t hard: just forward the port to any of the backend servers, which avoids decrypting on the proxy (see the sketch below). You will want to use the same hostkey on all the servers though, or SSH will refuse to connect. The client side doesn’t require any special setup.
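
    For illustration, the proxy side can be plain TCP passthrough; something like this hypothetical HAProxy fragment (names, port and addresses made up) forwards everything to git1 unless it’s down, which also keeps all the writes in one place:

        # Hypothetical HAProxy fragment: pass SSH traffic through untouched to the
        # first healthy backend; 'backup' servers are only used if git1 is down.
        frontend git_ssh
            bind *:2222
            mode tcp
            default_backend git_ssh_servers

        backend git_ssh_servers
            mode tcp
            server git1 10.0.0.11:22 check
            server git2 10.0.0.12:22 check backup
            server git3 10.0.0.13:22 check backup

    Because nothing is decrypted on the proxy, clients see whichever backend’s hostkey they land on, which is why the key needs to be the same on all of them.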

    Replication

    Git is a distributed version control system, so you could replicate it at that level; alternatively you could use a replicated filesystem, or simple file-based replication. Each has its own trade-offs.

    • Git replication Using git pull to replicate between repositories is probably going to be your most reliable option, as it’s the job git was built for, and it doesn’t rely on messing with git’s underlying files directly. The one caveat is that, if you push to different servers in quick succession, you may cause a merge conflict, which would break your replication. The cleanest way to deal with that is to have the load balancer send all requests to server1 if it’s up, and only switch to the next server if all the prior ones are down. That way writes will all be going to the same place. Then set up replication in a loop, with server2 pulling from server1, server3 pulling from server2, and so on up to server1 pulling from server5. With frequent pulls, changes that are committed to server1 will quickly replicate to all the other servers (a minimal sketch of such a pull job follows this list). This would effectively be a shared-nothing solution, as none of the servers are sharing resources, which would make it easier to geographically separate them. The load balancer could be replaced by a CNAME record in DNS, with a daemon that updates it to point to the correct server.

    • Replicated filesystem Git stores its data in a fairly simple file structure, so placing that on a replicated filesystem such as GlusterFS or Ceph would mean multiple servers could use the same data. From experience, this sort of thing is great when it’s working, but can be fragile and break in unexpected ways. You don’t want to be up at 2am trying to fix a file replication issue if you can avoid it.

    • File replication. This is similar to the git replication option, in that you have to be very aware of the risk of conflicts. A similar strategy would probably work, but I’m not sure it brings you any advantages.
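
    To make the git replication option a bit more concrete, here’s a minimal sketch of the sort of pull job each server in the loop could run from cron. The repository path, hostname and refspec are assumptions, and it uses a forced git fetch rather than git pull because the server-side copies would normally be bare repositories:

        #!/usr/bin/perl
        # Hypothetical pull job for one node in the replication loop: server2 would
        # point $upstream at server1, server3 at server2, and so on. Paths and
        # hostnames are made up.
        use strict;
        use warnings;

        my $repo     = '/srv/git/project.git';                    # assumed bare repo
        my $upstream = 'ssh://git@server1/srv/git/project.git';   # previous node in the loop

        chdir $repo or die "Cannot chdir to $repo: $!";
        # Force-update every branch from the upstream copy into this mirror;
        # this is the bare-repository equivalent of the git pull described above.
        system('git', 'fetch', $upstream, '+refs/heads/*:refs/heads/*') == 0
            or warn "Replication pull from $upstream failed\n";

    Run every minute or so, that gives you the loop described above; the tighter the schedule, the smaller the window in which a failover could miss an unreplicated push.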

    I think my preferred solution would be to have SSH access to the git servers and to set up pull-based replication on a fairly fast schedule (where fast is relative to how frequently you push changes). You mention having a VPS as one of the servers, so you might want to push changes to that rather than have it be able to connect to your internal network.

    A useful property of git is that, if the server is missing changesets you can just push them again. So if a server goes down before your last push gets replicated, you can just push again once the system has switched to the new server. Once the first server comes back online it’ll naturally get any changesets it’s missing and effectively ‘heal’.


  • notabot@lemm.ee to Selfhosted@lemmy.world · Testing vs Prod · 3 months ago

    I manage all my homelab infra stuff via Ansible and run services via Kubernetes. All the Ansible playbooks are in git, so I can roll back if I screw something up, and I test them on a sacrificial VM first when I can. Running services in Kubernetes means I can spin up new instances and test them before putting them live.
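
    As a sketch of what that looks like in practice (the inventory and playbook names are just examples), the sacrificial-VM step is mostly a matter of pointing the same playbook at a test inventory, with a dry run first:

        # Hypothetical: dry-run against the test VM, then apply it there for real
        ansible-playbook -i inventories/test site.yml --check --diff
        ansible-playbook -i inventories/test site.yml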

    Working like that makes it all a lot more relaxing as I can be confident in my changes, and back them out if I still get it wrong.



  • How do such people program?

    They don’t. They used to copy and paste stuff they found on the internet, then when it didn’t work they’d make a barely coherent post on Stack Exchange, or maybe on the issue tracker of one of the packages they think they’re using. I suppose that nowadays they copy and paste whatever they get out of the LLM du jour, then try to tell it that it didn’t work, copy and paste the answer and repeat until it either compiles or they finally give up and post to an issue tracker.


  • I normally just use X forwarding over SSH. For simple, X-native apps it usually works nicely, but if you’re using something that draws its own UI (Electron apps, browsers and the like often do) it’ll be extremely painful or just fail.
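
    For example (hostname made up, and assuming X11 forwarding is enabled in sshd on the remote end):

        # Run a remote X application with its window displayed locally
        ssh -X user@remotehost xclock
        # -Y (trusted forwarding) is sometimes needed when -X is too restrictive
        ssh -Y user@remotehost xterm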

    I believe there is, or was, a way to run a vnc client rootless, but I think you needed to configure the server in a specific way too. It’s been a long time since I tried though so things will have changed.


  • find can be a bit slow because it enumerates every directory recursively from the root you specified, but it lets you do a lot more than just search by name. locate is available on most distros and gives fast results, albeit from when the index was last rebuilt (usually nightly). They both have the vital property that they output a list of files to stdout for further processing.
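
    A couple of quick examples of what I mean (the paths and patterns are arbitrary):

        # find: slower, but flexible predicates, and the output pipes cleanly
        find ~/projects -name '*.pl' -mtime -7 -print0 | xargs -0 grep -l 'use strict'
        # locate: near-instant, but only as fresh as the last index rebuild
        locate -i notes.md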





  • Trying to avoid using any arithmetic operators, and sticking just to binary (extending beyond 16 bit unsigned ints is left as an exercise for the interested reader):

    #!/usr/bin/perl
    use strict;
    use warnings;
    
    # This increments $i, using only bitwise AND and OR on a 16 bit value
    
    my $i=1;
    print "Start: $i ";
    
    # All 16 bits already set: there's nowhere left to carry to.
    if (($i & 0b1111111111111111) == 0b1111111111111111) {die "Overflow";}
    # Lowest bit clear: setting it is the whole increment.
    if (($i & 0b0000000000000001) == 0b0000000000000000) {$i=(($i & 0b1111111111111110) | 0b0000000000000001);}
    # Lowest bit set: find the lowest clear bit, set it, and clear every bit below it.
    else
    {
            if (($i & 0b0111111111111111) == 0b0111111111111111) {$i=(($i & 0b0000000000000000) | 0b1000000000000000);}
            if (($i & 0b0011111111111111) == 0b0011111111111111) {$i=(($i & 0b1000000000000000) | 0b0100000000000000);}
            if (($i & 0b0001111111111111) == 0b0001111111111111) {$i=(($i & 0b1100000000000000) | 0b0010000000000000);}
            if (($i & 0b0000111111111111) == 0b0000111111111111) {$i=(($i & 0b1110000000000000) | 0b0001000000000000);}
            if (($i & 0b0000011111111111) == 0b0000011111111111) {$i=(($i & 0b1111000000000000) | 0b0000100000000000);}
            if (($i & 0b0000001111111111) == 0b0000001111111111) {$i=(($i & 0b1111100000000000) | 0b0000010000000000);}
            if (($i & 0b0000000111111111) == 0b0000000111111111) {$i=(($i & 0b1111110000000000) | 0b0000001000000000);}
            if (($i & 0b0000000011111111) == 0b0000000011111111) {$i=(($i & 0b1111111000000000) | 0b0000000100000000);}
            if (($i & 0b0000000001111111) == 0b0000000001111111) {$i=(($i & 0b1111111100000000) | 0b0000000010000000);}
            if (($i & 0b0000000000111111) == 0b0000000000111111) {$i=(($i & 0b1111111110000000) | 0b0000000001000000);}
            if (($i & 0b0000000000011111) == 0b0000000000011111) {$i=(($i & 0b1111111111000000) | 0b0000000000100000);}
            if (($i & 0b0000000000001111) == 0b0000000000001111) {$i=(($i & 0b1111111111100000) | 0b0000000000010000);}
            if (($i & 0b0000000000000111) == 0b0000000000000111) {$i=(($i & 0b1111111111110000) | 0b0000000000001000);}
            if (($i & 0b0000000000000011) == 0b0000000000000011) {$i=(($i & 0b1111111111111000) | 0b0000000000000100);}
            if (($i & 0b0000000000000001) == 0b0000000000000001) {$i=(($i & 0b1111111111111100) | 0b0000000000000010);}
    }
    print "End: $i\n";
    

  • It’s a non-starter for me because I sync my notes, and sometimes a subset of my notes, to multiple devices and multiple programs. For instance, I might use Obsidian, Vim and tasks.md to access the same repository, with all the documents synced between my desktop and server, and a subset synced to my phone. I also have various scripts to capture data from other sources and write it out as markdown files. Trying to sync all of this to a database that is then further synced around seems overly complicated to say the least, and would basically just be using Trillium as a file store, which I’ve already got.

    I’ve also been burnt by various export/import systems either losing information or storing it in an incompatible way.


  • I like it; this is clearly very enterprisey and solution-focused, but I would like to suggest a couple of amendments, if I may?

    • Namespaces We should make full use of namespaces. Make the structural tags be in a language-specific namespace (to be referenced in every function spec, obviously), but change the in and out params to use the parameter name as the tag, namespaced to the function they’re for, with a type attribute.

    • In-memory message queues Have all function invocations be marshalled as XML documents posted to an in-memory message queue. Said documents should use a schema that validates the structure and a function-specific schema to validate the types of the arguments being passed. Namespace everything.

    I reckon we could power a medium-sized country if we could generate energy from the programmers’ despair.