It’s the physical grain of the canvas.
An LVT discourages searching for new uses of land (e.g. prospecting for oil)
One: I am happy for my government to discourage fossil fuel mining and tax it heavily, but this “tax disincentivises…” argument is pretty bogus, because the value of the land won’t exceed the profits you make from it. And taxing corporations that make lots of money from natural resources they didn’t create is good, not bad.
An LVT discourages developers from building infrastructure or developing nearby plots
Which developers ever built non-essential infrastructure?
The really, really simple solution to a high Land Value Tax on the land under large developments is to sell the land along with the housing or other buildings you built; don’t become a landlord. This is THE WHOLE POINT of a land value tax. Landlords use their land ownership to extract an income, using an asset they refuse to sell outright, from someone who needs it. House prices fall because they can’t be used to extract an income, and they become worth what they are, rather than worth what value can be extracted, for no benefit, from tenants. This is an advantage, not a disadvantage.
The article goes on to engage in more speculative FUD but I’ve run out of energy for this gish gallop.
The fifth Doctor knows who the Portreeve of Castrovalva really is.
Yes and no. Beliefs can definitely shape reality.
If someone believes that they can’t do something difficult, they often don’t attempt it, so don’t acquire the skills they would need, and stay unable to do it. The converse is also true.
Children are heavily influenced by their parents’ beliefs about them.
Believing something about different brands of soda doesn’t change their chemical composition, but in a world where products are judged on their sales rather than their chemical composition, changing the perception of a product can fundamentally change its sales, making it a better product by the only objective measure that’s consistently used. This is even more true in the world of fashion, trainers being a particularly strong example.
Anything where human behaviour changes reality is a place where beliefs change reality.
Our beliefs shape the world strongly and powerfully. They change reality.
(Personally, and irrelevantly to your question, I think it’s weird to shave your pubes, and I think that based on who started that trend, why they started it and why it became popular, but people younger than me, who don’t remember any different, disagree strongly.)
But the fact that your son trusts you with that question and that you calmly helped him and didn’t make a big deal out of it, is an absolute parenting win. Who does your teenaged son go to when he’s worried about something personal and sensitive and embarrassing? He goes to you, and you help him and he is right to trust you.
You are doing excellently as a dad.
This graph is really, really wrong. Properly messed up.
This is you
I already told you my experience of the crapness of LLMs and even explained why I can’t share the prompt etc. You clearly weren’t listening or are incapable of taking in information.
There’s also all the testing done by the people talked about in the article we’re discussing which you’re also irrationally dismissing.
You have extreme confirmation bias.
Everything you hear that disagrees with your absurd faith in the accuracy of the extreme blagging of LLMs gets dismissed for any excuse you can come up with.
You’re so insightful and wise. You have learned much from other viewpoints.
It’s like you didn’t listen to anything I ever said, or you discounted everything I said as fiction, but everything your dear LLM said is gospel truth in your eyes. It’s utterly irrational. You have to be trolling me now.
… because this sign was made before the IBM PC was invented.
Language changes over time.
I think you’re mistaking sarcasm for insanity, and the reason you’re doing that is that you were already belittling their viewpoint quite fiercely, rejecting absolutely everything they said just because you disagree with their conclusion.
it’s so good at parsing text and documents, summarizing
No. Not when it matters. It makes stuff up. The less you carefully check every single fucking thing it says, the more likely you are to believe some lies it subtly slipped in as it went along. If truth doesn’t matter, go ahead and use LLMs.
If you just want some ideas that you’re going to sift through, independently verify and check for yourself with extreme skepticism as if Donald Trump were telling you how to achieve world peace, great, you’re using LLMs effectively.
But if you’re trusting it, you’re doing it very, very wrong and you’re going to get humiliated because other people are going to catch you out in repeating an LLM’s bullshit.
I have a similar story, but it got worse and worse with the lies as it got through the table. I fought it for an hour, then I wrote a script instead.
You’re better off asking one human to do the same task ten times. Humans get better and faster at things as they go along. Always slower than an LLM, but LLMs get more and more likely to veer off on some flight of fancy, further and further from reality, the more they say to you. The chances of them staying factual in the long term are really low.
It’s a born bullshitter. It knows a little about a lot, but it has no clue what’s real and what’s made up, or it doesn’t care.
If you want some text quickly, that sounds right, but you genuinely don’t care whether it is right at all, go for it, use an LLM. It’ll be great at that.
I would be in breach of contract to tell you the details. How about you just stop trying to blame me for the clear and obvious lies that the LLM churned out and start believing that LLMs ARE strikingly fallible, because, buddy, you have your head so far in the sand on this issue it’s weird.
The solution to the problem was to realise that an LLM cannot be trusted for accuracy, even if the first few results are completely accurate; the bullshit will creep in. Don’t trust the LLM. Check every fucking thing.
In the end I wrote a quick script that broke the input up on tab characters and wrote the sentence. That’s how formulaic it was. I regretted deeply trying to get an LLM to use data.
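For what it’s worth, here’s a sketch of the kind of script I mean. The field names and the sentence template are invented for illustration (I can’t share the real ones), but the shape is the same: split each row on tab characters, slot the fields into a fixed template, done. No hallucinations possible.

```python
import sys

def rows_to_sentences(lines):
    """Turn tab-separated rows into formulaic sentences.

    The column layout (name, value, unit) and the template below are
    hypothetical stand-ins for the real, confidential ones.
    """
    sentences = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue  # skip malformed rows rather than invent data
        name, value, unit = fields[:3]
        sentences.append(f"{name} measured {value} {unit}.")
    return sentences

if __name__ == "__main__":
    # Reads TSV rows on stdin, prints one sentence per row.
    for sentence in rows_to_sentences(sys.stdin):
        print(sentence)
```

Ten minutes of work, and every output line is exactly as right as its input line.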
The frustrating thing is that it is clearly capable of doing the task some of the time, but drifting off into FANTASY is its strong suit, and it doesn’t matter how firmly or how often you ask it to be accurate or use the input carefully. It’s going to lie to you before long. It’s an LLM. Bullshitting is what it does. Get it to do ONE THING only, then check the fuck out of its answer. Don’t trust it to tell you the truth any more than you would trust Donald J Trump to.
Yep. UK.
Shit.
That is stunningly beautiful. Wow.