

That’s an odd definition of non-public if the information is available to everyone. More like annoyingly public.
> These include semgrep, ast-grep, LLMs, and one-off scripts. After running these tools on a large codebase, you usually end up with lots of additional unintended changes. These range from formatting/whitespace to unrequested modifications by LLMs.
Maybe LLMs do, but why would semgrep or your one-off script be making unrelated changes?
This is like using `sed` to replace something and then using `grep` to filter out the very things you just specifically modified.
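Spelled out, that workflow looks something like this (a made-up sketch; `old_call`/`new_call` are placeholder names):

```sh
# Made-up sketch of the workflow being criticized:
sed -i 's/old_call/new_call/g' src/*.c   # rewrite exactly what you asked for...
git diff -U0 | grep -v 'new_call'        # ...then filter out the very lines you just changed
```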
It should be unnecessary if you commit frequently enough and don’t do 10 different refactorings before starting to commit each one.
> Visit takeout.google.com and select Google Podcasts to export your Google Podcasts data in OPML format.
I wonder when that became a thing. I migrated at the beginning of the year and had to manually add all subscriptions because Google Podcasts had no way of exporting anything.
I guess someone made a GDPR complaint.
The proof of work is the commit content itself! Unlike some arbitrary brute force task of no value.
People just nuke a local git repo and reclone if something goes wrong. There isn’t even an attempt to understand anything.
Yeah, titlegore material.
Pascal is a simpler and more limited language, so it’s not entirely surprising. It also has fewer and smaller standard libraries to link in.
As to C# and F#, what’s wrong with the difference? The functional coding style of F# prefers immutable data over the possibly mutable data idiomatic in C#, and that requires more allocations and garbage collection.
That allows processes to be reused, but the interpreter must still be set up cleanly and torn down each time. That includes things like open files, database connections, and application configuration.
Or you could just write your backend in JS and save a lot according to this table.
The big inefficiency of PHP is that every request is handled by a fresh interpreter instance, as opposed to one process running the server continuously.
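You can get a rough feel for that per-request overhead directly (numbers vary wildly by machine and PHP build; this is just an illustration):

```sh
# A classic CGI-style deployment pays roughly this startup/teardown cost on
# every request; a long-running server process pays it once.
time php -r ''   # spawn the interpreter, execute nothing, tear it down
```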
So it has the intended effect.
Log files themselves don’t, but I’m just comparing them with simpler files: simpler structure, simpler algorithms, better complexity.
It’s not necessarily about the load, it’s about the algorithmic complexity. Going from lists (lines in a file, characters in a line) to trees introduces a potentially exponential increase in complexity due to the number of ways the same list of elements can be organized into a tree.
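Concretely, the number of distinct binary trees over n elements is the Catalan number, which grows exponentially:

$$C_n = \frac{1}{n+1}\binom{2n}{n} \sim \frac{4^n}{n^{3/2}\sqrt{\pi}}$$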
Also, you’re underestimating the amount of processing. It’s not about pure CPU computations but RAM access or even I/O. Even existing non-semantic diff implementations are unexpectedly inadequate in terms of performance. You clearly haven’t tried diffing multi-GB log files.
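Even plain line-based GNU diff needs help at that scale (a rough illustration; `old.log`/`new.log` are placeholders):

```sh
# Line-oriented diff of two huge log files. Even this, with no semantic
# analysis at all, can take minutes and lots of RAM on multi-GB inputs.
diff --speed-large-files -u old.log new.log > changes.patch
```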
How do you expect it to be shown though?
Because text is text and all `}` are the same.
Diffing algorithms on trees might not be as efficient, especially if they have to find arbitrary node moves; even plain ordered tree edit distance takes cubic time, and moves only make it harder.
> Automatically determining a semantic version bump (based on the types of commits landed).
That’s overly optimistic. It’ll be wrong the moment one person forgets one exclamation mark in one commit message. And it might not even be their fault if it isn’t clear, at the time, that the change breaks something downstream.
You can’t replace proper release engineering.
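For a sense of how fragile that is, here is a minimal sketch of what such automation boils down to, assuming Conventional Commits and v-prefixed tags (a hypothetical script, not any particular tool):

```sh
# Derive the bump from commit subjects since the last tag. One forgotten "!"
# or "BREAKING CHANGE:" footer and the answer is silently wrong.
last_tag=$(git describe --tags --abbrev=0)
log=$(git log "${last_tag}..HEAD" --pretty='%s%n%b')
if   echo "$log" | grep -qE '^[a-z]+(\([^)]+\))?!:|^BREAKING CHANGE:'; then echo major
elif echo "$log" | grep -qE '^feat(\([^)]+\))?:'; then echo minor
else echo patch
fi
```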
IT can look up the original (including all headers) based on the forwarded content. It’s on the same mail server.
They might not have always been empty. Could be that there was a branch for a PR that got merged, so the branch was deleted.
And `git merge --no-commit` to do whatever you want on the proposed merge before actually creating the commit. Test or whatever else.
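For example (`feature-branch` and the test command are placeholders):

```sh
git merge --no-commit --no-ff feature-branch  # stage the merge, stop before committing
make test                                     # inspect or test the merged tree
git commit                                    # happy? create the merge commit
# git merge --abort                           # not happy? throw it all away
```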
The company didn’t abandon it; Microsoft bought them out indirectly and killed the competition. Nothing to see here.