Or a few more GB of LLM?
You fucking imbecile. If women are locked in the bedroom, how can they make dinner?? Moron
Banks hate this simple trick
Blowing up? Have you seen conservative discussion areas? Their god-king is not only one of the working class now, but he owned all the Democrats and made Harris look like a fool. He’s a master troll playing 4D chess!
They built a space laser?
It’s a watch that says you have no taste.
They know their target demographic
Most phones these days use randomized MACs
https://www.guidingtech.com/what-is-mac-randomization-and-how-to-use-it-on-your-devices/
Not sure if that applies to BT too, but it looks like there is some support for it in the standards:
https://novelbits.io/how-to-protect-the-privacy-of-your-bluetooth-low-energy-device/
https://novelbits.io/bluetooth-address-privacy-ble/
The recommendation per the Bluetooth specification is to have it change every 15 minutes (this is evident in all iOS devices).
So it seems like it’s implemented on at least some phones
https://www.bluetooth.com/blog/bluetooth-technology-protecting-your-privacy/
From 2015. So this seems to be a solved problem for a decade now
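To make the idea concrete, here’s a minimal sketch of how a randomized MAC is typically formed. This is an illustration, not any phone’s actual implementation: the common convention is that a randomized address sets the locally-administered bit (0x02) and clears the multicast bit (0x01) in the first octet, so it can’t collide with a vendor-assigned unicast address.

```python
import os

def random_mac() -> str:
    """Generate a randomized, locally-administered unicast MAC address.

    Sketch only: real devices rotate these on a schedule (e.g. the
    ~15-minute interval mentioned above for BLE resolvable addresses).
    """
    octets = bytearray(os.urandom(6))
    # Set "locally administered" (0x02), clear "multicast" (0x01).
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

print(random_mac())
```

Trackers that key on the MAC then see a different address after each rotation, which is the whole point.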
No, all sizes of llama 3.1 should be able to handle the same size context. The difference would be in the “smarts” of the model. Bigger models are better at reading between the lines and higher level understanding and reasoning.
Wow, that’s an old model. Great that it works for you, but have you tried some more modern ones? They’re generally considered a lot more capable at the same size
Increase the context length, and probably enable flash attention in Ollama too. Llama 3.1 supports up to 128k context, for example. That’s in tokens, and a token is on average a bit under four letters.
Note that a higher context length requires more RAM and is slower, so you ideally want to find a sweet spot for your use case and hardware. Flash attention makes this more efficient.
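If it helps, here’s roughly what that looks like in Ollama. A minimal sketch, assuming your Ollama version supports the `num_ctx` Modelfile parameter and the `OLLAMA_FLASH_ATTENTION` environment variable (check your version’s docs):

```
# Modelfile: raise the context window for a llama3.1-based model
FROM llama3.1
PARAMETER num_ctx 32768
```

Then build and run it with something like `ollama create llama3.1-32k -f Modelfile`, and start the server with `OLLAMA_FLASH_ATTENTION=1` set in its environment.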
Oh, and the model needs to have been trained at larger contexts, otherwise it tends to handle them poorly. So you should check the max context length the model you want to use was trained for.
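Since context limits are in tokens and text is in characters, here’s a rough back-of-the-envelope check. The ~3.8 chars/token figure is just the rule of thumb from above (actual tokenizers vary by model and language), and the output reserve is an arbitrary example value:

```python
def estimate_tokens(text: str, chars_per_token: float = 3.8) -> int:
    """Very rough token estimate from character count.

    chars_per_token is an assumption (~"a bit under 4 letters" per token);
    real tokenizers like the Llama one will give different numbers.
    """
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, context_tokens: int,
                 reserve_for_output: int = 512) -> bool:
    """Check whether a prompt plausibly fits a model's context window,
    leaving some room for the model's reply."""
    return estimate_tokens(text) + reserve_for_output <= context_tokens

prompt = "Summarize this document. " * 100
print(estimate_tokens(prompt))
print(fits_context(prompt, context_tokens=8192))
```

Useful for picking that sweet spot: estimate your typical prompt size, then set the context only as large as you actually need.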
Like when, during the Arab Spring, Egyptian politicians tried to get the military involved to stop the protests, and got back (paraphrased):
“Our primary job is to protect the Egyptian people from violence. You really don’t want us involved in this”
Sounds a bit like the Worldwar series by Harry Turtledove
If I go to a restaurant and order risotto, I haven’t made the dish, I’ve only consumed it. I want you to focus on that word “consume”, it’s important here.
If I buy bread at the bakery, ham and cheese at the grocery store, and make myself a sandwich, who’s the creator?
Hmm… what about pendulum painting? Where you put paint in a bucket, poke a hole in it, and let it swing back and forth over the canvas?
On one hand he chooses the paint, the size of the hole, the initial path, and so on; but on the other hand he lets nature and physics do the actual painting for him.
AI can be art. And you’re like the people criticizing the first photographers, saying what they did wasn’t art. That’s what I think.
And it’s going to have to be okay.
And woman a combatant factory?
I still use HTTP a lot for internal stuff running on my own network. There’s no spying there… I hope… And SSL for local-network-only services is a total PITA.
So I really hope browsers won’t go HTTPS-only
But even if you use a GoMommy extra super duper triple snake-oil security-checked SSL cert, if I trick LetsEncrypt into issuing a cert for that domain, I still have a valid cert for your site.
They want to force people to be hetero, Christian, and either white male and upper class, or anything else and subservient.
So for them, other groups forcing/brainwashing people to be gay/trans, atheist, and so on makes perfect sense.
Hell, why not take a car and just plow through it? Those fuckers are flagbearing an ideology that developed mass murder on an industrial scale. Let them feel a tiny bit of that on themselves.