• 0 Posts
  • 115 Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • A writer friend of mine says that if she were looking at just her own financial security, she’d be super grateful for AI, because she’s pivoted into fixing AI-written articles for places that laid off all their human writers. As a contractor, her hourly rate is way higher than at times when she’s been employed full time as a writer, plus it takes way longer to rewrite a broken article than it would’ve taken to just write a decent one from scratch (and they insist that she fix the AI articles, not rewrite them from scratch; I assume this is because the higher-ups have their heads so far up their arses that they’re not willing to acknowledge that they shouldn’t have laid off the humans).

    The work isn’t as fulfilling as proper writing, but she’s getting paid so much compared to before that she’s able to work less than she was before, and still has money to put into savings. She’s still living super frugally, as if she were still a typical, struggling writer, because she was expecting that this wouldn’t last for very long, but she’s been at this for quite a while now (with a surprising amount of repeat business). She thought for sure that work would begin to dry up once the financial year ended and companies went “holy shit, why are we spending so much on contractors?”, but last we spoke, it was still going strong.

  • I’m glad that at least one human is making bank off of this. And if anyone was going to luck into this, I’m glad it’s someone who had the extremely poor fortune of being laid off 4-5 times in one year (and this was pre-AI; she was just super unlucky).


  • I’m a biochemist who got into programming from the science side of it, and yeah, code written by scientists can be pretty bad. Something I saw a lot in my field was that people who needed some code to do something as part of a larger project (such as adding the hydrogens back onto a 3D protein structure from the Protein Data Bank) would write the thing themselves, and not even consider the possibility that someone else has probably already written the same thing, far better than they can, and made it available open source. This means there’s a lot of reinventing the wheel by people who are not wheel engineers.

    I find it so wild how few scientists I’ve spoken to about this stuff understand what open-source code actually means in the wider picture. Although I’ve never spoken to a scientist in my field who doesn’t know what open source means at all, and pretty much all of them understand open source software as being a good thing, this is often a superficial belief based purely on understanding that proprietary software is bad (I know someone who still has a PC running windows 98 in their lab, because of the one piece of essential equipment that runs on very old, proprietary code that isn’t supported anymore).

    Nowadays, I’m probably more programmer than biochemist, and what got me started on this route was being aware of how poor the code I wrote was, and wanting to better understand best practices to improve things like reliability and readability. Going down that path is what solidified my appreciation of open source — I found it super useful to try to understand existing codebases, and it was useful practice to attempt to extend or modify some software I was using. The lack of this is what I mean by “superficial belief” above. It always struck me as odd, because surely scientists of all people would be able to appreciate open source code as a form of collaborative, iterative knowledge production.


  • Indeed, that is the healthier way to go about things.

    Personally, I struggle with that kind of compartmentalisation, but I would probably be healthier if I could do that. I have never lasted long when doing work that I’m not passionate about, and when I am passionate about work, it’s hard to not bring it home (even if that’s just working on stuff adjacent to the task).

    I know a lot of people who work in academia, and it’s simultaneously inspiring and depressing to see how people’s research interests end up bleeding into basically all elements of their regular life. I think some people are just wired that way. I wish that they had the freedom to engage in that in a more healthy way, free from the additional bullshit that Capitalism heaps onto them, making the dynamic so toxic.

    However, given that we do live under such oppressive economic conditions, “work to live, not live to work” is an essential mantra to aspire towards, especially for people who put their whole heart into their work. It’s not ideal, but it’s a necessary lesson if we want to survive without burning out.


  • Something I find cool about this book is that it’s so well known that people who haven’t even read it will often gesture towards it to make a point. It reminds me of how “enshittification” caught on because so many people were glad to have a word for what they’d been experiencing.

    It’s a useful phrase to have. Recently a friend was lamenting that they’d had a string of bad jobs, and they were struggling to articulate what it was that they wanted from a job. They were at risk of blaming themselves for the fact that they’d struggled to find anything that wasn’t soul sucking, because they were beginning to doubt whether finding a fulfilling job was even possible.

    They were grasping at straws trying to explain what would make them feel fulfilled, and I cut in to say “all of this is basically just saying you don’t care what job you have, as long as it’s a non-bullshit job”. They pondered it for a moment before emphatically agreeing with me. It was entertaining to see their entire demeanour change so quickly: from being demoralised and shrinking to being defiant and righteously angry at the fucked up world that turns good jobs into bullshit. Having vocabulary to describe your experiences can be pretty magical sometimes.


  • I don’t have any specific examples, but the standard of code is really bad in science. I don’t mean this in an overly judgemental way — I am not surprised that scientists who have minimal code specific education end up with the kind of “eh, close enough” stuff that you see in personal projects. It is unfortunate how it leads to code being even less intelligible on average, which makes collaboration harder, even if the code is released open source.

    I see a lot of teams basically reinventing the wheel. For example, 3D protein structures in the Protein Data Bank (PDB) don’t have hydrogens on them. This is partly because protonation depends a heckton on the pH of the environment the protein is in. Aspartic acid, for example, is an amino acid whose variable side chain (the part that differs between amino acids) is CH2COOH in acidic conditions, but CH2COO- in basic conditions. Because protonation depends on both the protein and its environment, you tend to get research groups just bashing together some simple code to add hydrogens back on depending on what they’re studying. This can lead to silly mistakes, and shabby code in general.
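
    To give a flavour of the kind of quick-and-dirty logic I mean (a hypothetical sketch, not any group’s actual code): a fixed-pKa rule for deciding protonation states, using textbook free-amino-acid pKa values. The simplification is exactly where this sort of hand-rolled code bites you, because a residue’s effective pKa shifts with its local environment inside the protein.

```python
# Naive protonation rule: protonated when pH < pKa, deprotonated otherwise.
# The pKa values below are approximate textbook numbers for free amino
# acids; real side chains in a folded protein can shift substantially.

SIDE_CHAIN_PKA = {
    "ASP": 3.65,   # aspartic acid: CH2COOH <-> CH2COO-
    "GLU": 4.25,
    "HIS": 6.0,
    "CYS": 8.3,
    "LYS": 10.5,
    "ARG": 12.5,
}

def is_protonated(residue: str, ph: float) -> bool:
    """Decide a side chain's protonation state from a fixed pKa table."""
    return ph < SIDE_CHAIN_PKA[residue]

print(is_protonated("ASP", 7.4))  # False: CH2COO- at physiological pH
print(is_protonated("ASP", 2.0))  # True: CH2COOH in acidic conditions
```

    The sketch also shows why the “close enough” approach is tempting: it’s a few lines, and it’s right often enough to be dangerous.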

    I can’t be too mad about it though. After all, wanting to learn how to be better at this stuff and to understand what was best practice caused me to go out and learn this stuff properly (or attempt to). Amongst programmers, I’m still more biochemist than programmer, but amongst my fellow scientists, I’m more programmer than biochemist. It’s a weird, liminal existence, but I sort of dig it.


  • Useful context: I am a biochemist with a passing interest in neuroscience (plus some friends who work in neuroscience research).

    A minor point first: you should consider uploading the preprint as a PDF instead, as .docx files can render differently if people aren’t using the same word processor as you. Personally, I saw some formatting issues related to this (though nothing too serious).

    Onto the content of your work: something I think your paper would benefit from is linking to established research throughout. Academia’s insistence on good citations can feel like it’s mostly just gatekeeping, but it’s pretty valuable for demonstrating that you’re aware of the existing research in the area. This is especially important because research in a topic like this tends to attract a lot of cranks (my friends tell me that they fairly frequently get slightly unhinged emails from people who are adamant that they have solved the theory of consciousness). Citations throughout the body of your research make it clear which points are your own, and which are established research.

    Making it clear what you’re drawing on is especially important for interdisciplinary research like this, because it helps people who know one part of things really well, but don’t know much about the others. For example, although I am familiar with Friston’s paper, I don’t know what has happened in the field since then. I also know some information theory stuff, but not much. Citations are a way of implicitly saying “if you’re not clear on where we’re getting this particular thing from, you can go read more here”.

    For example, if you have a bit that’s made up of 2 statements:

    • (1): Something that’s either explicitly stated in Friston’s paper, or is a straightforwardly clear consequence of something explicitly stated
    • (2): Something that your analysis is adding to Friston’s as a novel insight or angle

    Then you can make statement 2 go down far easier if that first statement is clearly cited. I use Friston in this example both because I am familiar with the work and because I know that that paper was somewhat controversial in some of its assumptions and conclusions. Making it clear which points are new ones you’re making vs. established stuff that’s already been thoroughly discussed in its field can act sort of like a firebreak against criticism: you get the best of both worlds, building on top of existing research while also saying “hey, if you have beef with that original take, go take it up with them, not us”. It also makes it easier for someone to know what’s relevant to them: a neuroscientist studying consciousness who doesn’t vibe with Friston’s approach would not have much to gain from your paper, for instance.

    It’s also useful to do some amount of summarising the research you’re building on, because this helps to situate your research. What’s neuroscience’s response to Friston’s paper? Has there been much research building upon it? I know there have been criticisms against it, and that can also be a valid angle to cover, especially if your work helps seal up some holes in that original research (or makes the theory more useful such that it’s easier to overlook the few holes). My understanding is that the neuroscientific answer to “what even is consciousness?” is that we still don’t know, and that there are many competing theories and frameworks. You don’t need to cover all of those, but you do need to justify why you’re building upon this particular approach.

    In this case specifically, I suspect that the reason for building upon Friston is because part of the appeal of his work is that it allows for this kind of mathsy approach to things. Because of this, I would expect to see at least some discussion of some of the critiques of the free energy principle as applied to neuroscience, namely that:

    • The “Bayesian brain” has been argued as being an oversimplification
    • Some argue that the application of physical principles to biological systems in this manner is unjustified (this is linked to the oversimplification charge)
    • Maths based models like this are hard to empirically test.

    Linked to the empirical testing, when I read the phrase “yielding testable implications for cognitive neuroscience”, I skipped ahead because I was intrigued to see what testable things you were suggesting, but I was disappointed to not see something more concrete on the neuroscience side. Although you state

    “The values of dI/dT can be empirically correlated with neuro-metabolic and cognitive markers — for example, the rate of neural integration, changes in neural network entropy, or the energetic cost of predictive error.”

    that wasn’t much to go on for learning about current methods used to measure these things. Like I say, I’m very much not a neuroscientist, just someone with an interest in the topic, which is why I was interested to see how you proposed to link this to empirical data.

    I know you go more into depth on some parts of this in section 8, but I had my concerns there too. For instance, in section 8.1, I am doubtful of whether varying the temporal rate of novelty as you describe would be able to cause metabolic changes that would be detectable using the experimental methods you propose. Aren’t the energy changes we’re talking about super small? I’d also expect that for a simple visual input, there wouldn’t necessarily be much metabolic impact if the brain were able to make use of prior learning involving visual processing.

    I hope this feedback is useful, and hopefully not too demoralising. I think your work looks super interesting and the last thing I want to do is gatekeep people from participating in research. I know a few independent researchers, and indeed, it looks like I might end up on that path myself, so God knows I need to believe that doing independent research that’s taken seriously is possible. Unfortunately, to make one’s research acceptable to the academic community requires jumping through a bunch of hoops like following good citation practice. Some of these requirements are a bit bullshit and gatekeepy, but a lot of them are an essential part of how the research community has learned to interface with the impossible deluge of new work they’re expected to keep up to date on. Interdisciplinary research makes it especially difficult to situate one’s work in the wider context of things. I like your idea though, and think it’s worth developing.


  • Something that I’m disproportionately proud of is that my contributions to open source software are a few minor documentation improvements. One of those times, the docs were wrong and it took me ages to figure out how to do the thing I was trying to do. After I solved it, I was annoyed at the documentation being wrong, and fixed it before submitting a pull request.

    I’ve not yet made any code contributions to open source, but there have been a few people on Lemmy who helped me to realise I shouldn’t diminish my contribution because good documentation is essential, but often neglected.


  • For a while, I was subscribed as a patron to Elisabeth Bik’s Patreon. She’s a microbiologist turned “Science Integrity Specialist”, which means she investigates and exposes scientific fraud. Despite doing work that’s essential to science, she has struggled to get funding because there’s a weird stigma around what she does; it’s not uncommon to hear scientists speak of people like her negatively, because they perceive anti-fraud work as being harmful to public trust in science (which is obviously absurd, because auditing the integrity of research is surely necessary for building and maintaining trust in science).

    Anyway, I mention this because it’s one of the most dystopian things I’ve directly experienced in recent years. A lot of scientists and other academics I know are struggling financially, even though they’re better funded than she is, so I can imagine that it’s even worse for her. How fucked up is it that scientific researchers have to rely on patrons like me (especially when people like me are also struggling with rising living costs)?


  • It does mean something to them, but not in a way that will stop you from getting laid off; what it means is that after laying you off, they’ll quickly come to regret it and scramble to try to fill the knowledge gap they now have. I know a few people who were called up by the company basically begging them to help. A couple of people I know were able to leverage this into short-term contracting work (at exorbitantly higher rates than their salary was), and a few others instead just cackled in schadenfreude.



  • I have a random question, if you would indulge my curiosity: why do you use ‘þ’ in place of ‘th’? It’s rare that I see people using thorn in a modern context, and I was wondering why you would go to the effort?

    (þis question brought to you by me reflecting on your use of þorn, and specifically how my initial instinctual response was to be irked because it makes þings harder to read (as someone who isn’t used to seeing ‘þ’). However, I quickly realised þat being challenged in þis way is one of þe þings I value about conversations on þis platform, and I decided þat being curious would be much more fun and interesting than being needlessly irritable (as it appears some oþers opt to be, given how I sometimes see unobjectionable comments of yours gaþer inexplicable downvotes. I have written þis postscriptum using “þ” because I þought it would be an amusing way to demonstrate þe good-faiþedness of my question, as I’m sure you get asked þis a lot))


  • I don’t find the latency with Bluetooth headphones to be a problem if I’m just watching videos, but it’s super jarring if I’m doing something like gaming.

    It’s interesting because my current headphones (SteelSeries Arctis Nova Pro Wireless) can connect via Bluetooth, or wirelessly to a little dock that’s plugged into my PC (essentially a more complex dongle with a few settings on it, plus a battery charger). This means I can easily compare the Bluetooth latency to the dock’s latency, and it’s interesting to see the difference. I haven’t compared wired latency to the dock-wireless, but I certainly haven’t noticed any problems with the dock-wireless.

    A weird thing about these headphones is that the Bluetooth and the dock-wireless seem to work on different channels, because I can be connected to my phone’s audio by Bluetooth, and to my PC’s audio via the dock. I discovered this randomly after like a year of owning the headphones.

    They were quite expensive, but I rather like them, and would recommend them to someone who wants a “jack of all trades” pair of headphones. They were plug and play with Linux, which is a big part of why I got them.