• 3 Posts
  • 798 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • Yeah, the new pipeline is based HEAVILY on object inheritance and method/property calls so there is a paper trail for ALL of it. Also using Abstract Base Classes so future developers are forced to adhere to the architecture. It has to be in Python, but I am also trying to use the type hinting as much as humanly possible to force things into something resembling a typed codebase.
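    The combination described above (Abstract Base Classes plus type hints) can be sketched roughly like this. All names here (`Frame`, `PipelineStage`, `Normalize`) are hypothetical, not from the actual codebase:

    ```python
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Frame:
        data: list[float]

    class PipelineStage(ABC):
        """Every stage must subclass this and implement run()."""

        @abstractmethod
        def run(self, frame: Frame) -> Frame: ...

    class Normalize(PipelineStage):
        """Example concrete stage: scale values into [0, 1]."""

        def run(self, frame: Frame) -> Frame:
            peak = max(frame.data, default=1.0) or 1.0
            return Frame([v / peak for v in frame.data])
    ```

    Instantiating `PipelineStage` directly raises a `TypeError`, which is what forces future developers to adhere to the architecture.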


  • I literally told my boss that I was just going to rebuild the entire pipeline from the ground up when I took over the codebase. The legacy code is a massive pile of patchwork spaghetti that takes days just to track down where things are happening because someone, in their infinite wisdom, decided to just pass a dictionary around and add/remove shit from it so there is no actual way to find where or when anything is done.
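    The dict-passing anti-pattern, and the typed alternative, look roughly like this (a minimal sketch with made-up names, not the real pipeline):

    ```python
    from dataclasses import dataclass

    # Legacy style: a mutable dict drifts through the pipeline and
    # any function can silently add or remove keys along the way.
    def legacy_step(ctx: dict) -> dict:
        ctx["result"] = ctx.pop("raw", 0) * 2  # key changes invisible to callers
        return ctx

    # Typed style: an explicit, immutable object makes every field traceable.
    @dataclass(frozen=True)
    class StepResult:
        result: int

    def typed_step(raw: int) -> StepResult:
        return StepResult(result=raw * 2)
    ```

    With the dataclass version, "where does this field come from" is a one-click find-usages query instead of days of spelunking.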


  • This argument really only holds water if the purpose of film and television ratings were to make commentaries on social moral trends.

    Unfortunately they have an explicit and expressed purpose that is not that. They are a tool which is intended to inform and guide consumers on the content of a product ahead of purchase so they can make an informed decision. They should be locked to a standard which does not change, or all previous ratings should be reevaluated when the standard is changed. The media does not go away. And all ratings should be directly comparable, regardless of: when they were rated, who the “intended” audiences are, or what genres they belong to.

    As a slightly hyperbolic example (pardon the minor straw man), imagine you are a Christo-Fascist who, among other things, believes that nudity is a sin, and you never want your children exposed to the evils of a bare breast. So you set your TV to only show G or PG movies. Then you find your child watching the 1984 rom-com Splash and boom, tiddies in a fish tank. It is PG because the PG standard allows for brief nudity (https://www.filmratings.com/).

    They don’t apply the standards they have. They routinely make decisions based on backlash from Christo-Fascist “Parents” groups, which means that film ratings increasingly do not reflect the overall moralistic stance of the greater society.






  • The Bohr model is at least a useful simplification of atomic structure. What needs to be taught is that everything you learn before college and intensive, narrow topical courses is simplified to the point of being incorrect, with the hope that you gain enough of an intrinsic understanding of the concept that the less simplified explanation you get next will make sense. I say this because it will still be simplified to the point of being wrong, but it will be a step closer to the truth. This is the essence of education.

    Elementary/middle school: ice is water that has frozen solid.
    HS: ice is water that has lost enough energy that the molecules form a crystalline lattice.
    College: there are actually 19 or 20 kinds of water ice that have been verified, but as many as 74,963 might exist.
    Post-collegiate: There may be 74,963 kinds of ice, but I know one ICE we should definitely eliminate from this world.



  • You mentioned the areas being countries. This leads me to believe that they are concave figures, correct? If you are unfamiliar, a concave figure is something that has a space that doubles back into the interior of the shape. So an o is convex, and a c is concave, as an example. Convex shapes are much simpler to find the area of. You can use a Riemann Sum as others have suggested. I would probably just pick a point inside the shape and do a bunch of triangles around with the point as an apex and the bases as two points on the edge of the surface, then sum up the areas of each triangle. You could even probably use a triangulation algorithm built into the engine to do this. (I am unfamiliar with the specifics of the Godot engine).
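    The triangle-fan idea above can be sketched in a few lines. This is a hedged, engine-agnostic version (plain Python rather than GDScript, and `polygon_area_fan` is a made-up name); it assumes the shape's boundary is given as an ordered list of (x, y) vertices and, as noted, is only reliable for convex shapes:

    ```python
    def polygon_area_fan(points):
        """Sum triangle areas fanned out from an interior apex point."""
        # The centroid of the vertices is a safe apex for a convex shape.
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        total = 0.0
        for i in range(len(points)):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % len(points)]  # wrap to close the polygon
            # Triangle (apex, p1, p2) area via the 2D cross product.
            total += abs((x1 - cx) * (y2 - cy) - (x2 - cx) * (y1 - cy)) / 2
        return total
    ```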

    For concave shapes it becomes a little more complex. It has been mentioned that you can draw a bounding box around the shape, so that would allow you to calculate it using a numerical method. Take random samples inside the bounding box and count up the number that are inside the shape and divide by the total number of samples. The value you get will be the % of the area of the bounding rectangle that the shape takes up, so just multiply the easy area by the % and you will get an answer that is close enough. It may take a bit to get the sample count right, but it will get there. Try to make sure the samples are as uniform as possible. You could even scatter the sample points then relax them for a couple iterations before counting to increase accuracy without increasing samples.
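    The bounding-box sampling approach above (Monte Carlo estimation) might look like this. `monte_carlo_area` is a hypothetical helper; it assumes you can supply a `contains(x, y)` test for the shape, which most engines give you in some form:

    ```python
    import random

    def monte_carlo_area(contains, x0, y0, x1, y1, n=100_000):
        """Estimate area: (fraction of random samples inside) * bbox area."""
        hits = sum(
            contains(random.uniform(x0, x1), random.uniform(y0, y1))
            for _ in range(n)
        )
        bbox_area = (x1 - x0) * (y1 - y0)
        return bbox_area * hits / n
    ```

    A quick sanity check: sampling a unit circle inside a 2x2 box converges on pi. Accuracy grows with the square root of the sample count, which is why relaxing the points toward a more uniform spread helps.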


  • The word copulate has been around since the late 1400s (before the colonization of North America by Europe), and Old English had the word hǽmed, which dates back to the Middle Ages.

    You are confusing euphemism with language and applying puritanical, systemic manipulation to language. That is censorship, and it does not mean that the words don’t exist in the language. Whole different can of worms.

    Yes, it has been shown that language, and having words to describe things, changes the way the brain processes those things. There are languages without a word for the color blue, and the people who speak them struggle to differentiate it from green when tested. Once they are taught a language that includes a word for the color, their brains eventually begin to differentiate it. https://www.bbc.com/future/article/20180419-the-words-that-change-the-colours-we-see




  • Adalast@lemmy.world to Science Memes@mander.xyz · Awooga · 2 months ago

    Yes, yes, but we live in this timeline, so the research that will be done into the mediating factors will 100% become a “breast enlargement therapy” in health spas should it even remotely prove repeatable and controllable, safety be damned.


  • I made this comment on a previous post. Vibe Coding is to Coding as Previsualization (Previs) is to Visual Effects. Previs is a quick slap job used to make sure timing is correct, that on-set assets will all work, and to communicate to artists, directors, producers, and on-set operators what is expected. It is entirely separate from the final product, and nothing ever crosses the barrier between Preproduction and Production.






  • I wasn’t attempting to attack what you said, merely pointing out that once you cross the line into philosophy things get really murky really fast.

    You assert that LLMs aren’t taught the rules, but every word is not just a word. The tokenization process includes part-of-speech tagging, predicate tagging, etc. The ‘rules’ that you are talking about are actually encapsulated in the tokenization process. The way tokenization for LLMs works, at least as of a few years ago when I read a textbook on building LLMs, is predicated on the rules of the language. Parts of speech, syntax information, word commonality, etc. are all major parts of the ingestion process before training is done. They may not have had a teacher giving them the ‘rules’, but that does not mean the rules were not included in the training.

    And circling back to the philosophical question of what it means to “learn” or “know” something, you actually exhibited what I was talking about in your response on the math question. Putting two piles of apples on a table and counting them to find the total is a naïve application of the principles of addition to a situation, but it does not describe why addition operates the way it does. That answer does not get discussed until Number Theory in upper-division math courses in college. If you have never taken that course or studied Number Theory independently, you do not know ‘why’ adding two numbers together gives you the total; you know ‘that’ adding two numbers together gives you the total, and that is enough for your life.

    Learning, and by extension knowledge, have many forms and processes that certainly do not look the same by comparison. Learning as a child is unrecognizable when compared directly to learning as an adult, especially in our society. Non-sapient animals all learn and have knowledge, but the processes for it are unintelligible to most people, save those who study animal intelligence. So to say the LLM does or does not “know” anything is to assert that its “knowing” or “learning” will be recognizable and intelligible to the layman. Yes, I know that it is based on statistical mechanics; I studied those in my BS in Applied Mathematics. I know it is selecting the most likely word to follow what has been generated. The thing is, I recognize that I am doing exactly the same process right now, typing this message. I am deciding what sequence of words and tones of language will be approachable and relatable while still conveying the argument I wish to levy. Did I fail? Most certainly. I’m a pedantic neurodivergent piece of shit having a spirited discussion online; I am bound to fail because I know nothing about my audience aside from the prompt you gave me to respond to. So I pose the question: when behaviors are symmetric and outcomes are similar, how can an attribute be applied to one but not the other?