I recently interviewed Gabriel Custodiet of The Watchman Privacy Podcast and escapethetechnocracy.com. Gabriel shared his experience running AI models locally: leveraging the technology's benefits and testing its limits while avoiding the surveillance built into the online platform portals of OpenAI, Anthropic and others.
Our conversation also delved into where the AI revolution is going, its potential for human progress, and its dangers. Enjoy!
—Joe Doran
JD: So what have you been doing personally with AI lately? I mean, besides running AI models locally on your computer, which we’ll get into?
Gabriel Custodiet: Yeah, I love the [visual] art tools in AI. I'm somebody who's not artistically inclined, but I have a very good idea of what I'm looking for, as people can tell from the Watchman Privacy brands. In the past I've had to deal with artists and spend weeks and weeks trying to explain: this is what I'm looking for, and this, and no, you're doing it wrong. With something like Midjourney, I can generate dozens of images myself. I can modify them, I can alter them, and it's really opened up a great opportunity, just a business opportunity to crank out cool images for my products and such.
But also just to experiment, to find inspiration, whether that's video game scenes or historical moments or just really cool artistry. It's been particularly fruitful for my sheer imaginative interest, since I don't have the skills to actually produce this art myself.
JD: Gotcha. I myself had the paid tier for about a year.
Gabriel Custodiet: It is a big tech company, and if you don’t pay a certain amount, they’re going to make your images available to the public. So that’s their little schtick, as it were.
JD: Yes. Have you had any moments where it wouldn’t produce stuff you want? I mean, obviously anything explicit, pornographic, I know it won’t go there. But any political or any violent content? Because what I found when I was testing some of this stuff, is that even visuals that you might see in a movie poster or horror film or something like that, some of these systems will blanch at producing.
Gabriel Custodiet: Well, fortunately, living in the world that we live in, I’m kind of more like Winston from 1984. All the censoring happens at the local, self level. So I’m already censoring myself and not even asking for these sorts of prompts. I’m kind of joking, obviously–
JD: You had me going!
Gabriel Custodiet: Well, no, I think there’s obviously some part of that too, to all of us if we’re being–
JD: But I think people are like that. That’s the deal.
Gabriel Custodiet: Oh, 100 percent. Yeah.
JD: And I think especially your generation and younger, because you’ve grown up in much more of that world than I ever did for sure.
Gabriel Custodiet: Yeah. So of course as soon as I discovered all the privacy tools, I had some really gritty conversations, let’s say about all sorts of topics. But then I don’t put a lot of that stuff out into the world. And when it comes to creating something on Midjourney, where okay, it’s connected to a credit card and account and all the rest, and I know how they are more than anyone, I don’t necessarily push the boundaries unless I’m doing some testing.
So I stick within their guardrails, and I don’t like that, and I call it out. But within those guardrails, I can create some pretty cool stuff. If you want to make a warrior, you can make your Aztec warriors; it just won’t show them pulling the heart out of somebody. So you can still do some good stuff, but you’re absolutely right that this is a big problem. We’re going to talk about the AI Resistance course and the local hosting stuff, but the image stuff has been really difficult to self-host. We were playing with some tools recently and it was hilariously terrible. So that is a big hurdle for local hosting that we’re trying to fix, but for now, we’re having to rely on these big tech options that are much more censorious.
JD: I think that’s a good segue for you to talk about what you’re doing with hosting AI models locally, and the benefits you find in doing that.
Gabriel Custodiet: Yeah, so my colleague convinced me to start looking into AI. I’ve been a big AI skeptic. I still think that it’s not a good direction for humanity to go down, but Pandora’s Box has been opened. All these tools are public and now open source, and we have China with DeepSeek releasing these massive open-source models that everybody can take.
So the cat’s out of the bag, 100 percent. And with that in mind, I think it’s important that we understand these tools, that we master these tools. The cure for powerlessness is power. If John Connor were alive today, he would be hosting his own local AI in order to understand what Skynet was doing. So that’s my little change of heart there, not a change in where I think things are going. So we created a course on escapethetechnocracy.com called the AI Resistance course, and I haven’t found anything else like it. My colleague and I spent a month pooling our resources and thoughts together, and he’s been doing this stuff for years, basically showing people, first of all, how AI works and how you can use it in your life. We’re talking the text, and the art, and the coding, even. Because he’s a coder and a video game developer, he actually made a video game in 20 minutes using AI. It was pretty spectacular, including the art.
JD: Sounds pretty cool.
Gabriel Custodiet: And so we show you all the practical ways of doing it as people who have been using this for a long time. So we understand the stuff, we show you the practicalities of how it works, and then we kind of go off grid. We show you, okay, ChatGPT is censoring you. Alright, it’s collecting your data. That’s a big problem too, using these big tech tools, as they’re collecting your data. In some cases they’re using it to train their next models. This is not stuff that you should be uploading your files to and telling it sensitive stuff.
However, that’s what we want to use AI for. And so we talk about some privacy alternatives, but the real, long-term solution is to take these models that have been open sourced and run them on your own computer. You can then have them completely offline, disconnected from the internet.
Joe, I have, and I can pull it up right now…I’m looking at it on my desktop. I have a model I can pull up, completely disconnected from the internet. It’ll answer many of my questions that I want answered. And because I’m hosting it offline, I can get an uncensored model. There’s so many layers of censorship in these AI models. You have the censorship of the stuff that they’re trained on. You have the censorship baked into the cake by the companies themselves and they pride themselves on censorship.
And then you have the censorship of the prompt. So when you say, hey, Donald Trump and Kamala Harris having tea together, that prompt could be blocked outright by certain AI image generators.
So by running these locally, you can have uncensored models that aren’t collecting your data and aren’t necessarily communicating with the internet at all. There are lots of advantages to doing this, privacy and freedom from censorship being two of them. And so we played around with these, we show people how to do it, and then at the end we also tell people how to use AI, potentially, as a matter of self-defense against the technocracy itself. So that was also a very interesting part of the course.
JD: I’ll ask you to talk more about that in a minute, but tell me a little bit about how you take one of these models that’s been trained on data sets with perhaps biased information, or that has guardrails implemented in it. How do you uncensor the AI model?
Gabriel Custodiet: Yeah, that’s interesting. So by design, it could be that the people who train these models are explicitly telling it not to look at certain… I mean, that’s a possibility. Maybe they say, Hey, don’t look at anything with the word chemtrails in it, or whatever they want to block. I suppose they could do that. But in general, they kind of unleash these AIs onto the internet to collect whatever they can. And then they lie about it and they say they’re not collecting all this stuff, etc. But if you ask the AI questions about things, it’s clear what websites they visited. And so by nature, AIs know a lot. They know almost everything, and they’re not censored. They just want to be helpful and friendly as they say, right?
These companies themselves have to put the blinders on. They have to go in there and say, okay, we can’t talk about this and that; okay, you’re DeepSeek, you’re not going to answer any questions about Tiananmen Square. And I’ve shown this is actually the case. So these companies take this AI, which by nature knows everything and wants to answer questions, and they pare it down. They say, okay, you’re not going to talk about this. You’re going to focus on diversity and representation and whatever else the case may be. You’re going to recognize that everybody has a truth. They generally put all these left-wing woke blinders on them.
So when you get the open source model, you can see exactly what those blinders are, and they’re taken off right away. And so they’re not necessarily censored by design. The censorship comes as a product of these companies. And so it is not too bad. Most of the models you download are already uncensored, and you can ask them the real questions. And I’m not going to reveal what it is, but I have a question that I ask every one of them, to see just how uncensored they are.
JD: How uncensored they could be.
Gabriel Custodiet: Exactly. And everybody should have these kind of questions and ask them, not necessarily with your logged-in account—you don’t want to get that banned. But if you’re using ChatGPT without an account, you can go and test it a little bit and see what the guardrails are.
JD: Okay. So you downloaded this locally. I mean, how large are these models generally? I know that with hard drives and memory, there’s terabytes and terabytes of space now that people can buy for relative chump change compared to when I was buying my first memory back in the eighties, using floppy disks and then the little 3.5-inch plastic disks. And then 15-megabyte drives were incredible. Yikes, I go back that far. But when you’re pulling down these models, what kind of space are we talking about?
Gabriel Custodiet: Yeah, you can run these on a basic laptop these days. And so there’s different tiers obviously.
JD: But you’re saying they don’t connect with the cloud. How do they contain all that information? Is it just text files and they’re very small, relatively speaking?
Gabriel Custodiet: Yeah, so this is the magic of AI, Joe. You can have a five-gigabyte model. A lot of the popular ones I’m looking at here are only five gigabytes. But if you were to try to chart how much knowledge they contain, it would be hundreds and hundreds of gigabytes of information. So how do they know this stuff? Well, this is the magic of AI. What they’ve done is they’ve tried to simulate the human brain, which means that instead of just storing data stacked on top of data, it’s making correlations, it’s making connections. It’s memorizing things in the most optimal way possible, so that with your five-gigabyte model, which I’m looking at right here, you can ask it about, and it can correlate, just about anything that ever happened on Wikipedia. Now, Joe, Wikipedia’s knowledge is much more than five gigabytes. So how does it know? That’s the magic of AI. It’s the way that they have been trained and modeled on the human brain, and it’s pretty astounding.
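The file sizes Gabriel mentions aren't magic storage of raw text: a model on disk is essentially its parameter count times the bits stored per weight, and heavy quantization shrinks that footprint. A back-of-the-envelope sketch in Python (the 7-billion-parameter count and the 16-bit versus 4-bit precisions are illustrative assumptions, not figures from the interview):

```python
def model_file_size_gb(num_params: float, bits_per_weight: int) -> float:
    """Rough on-disk size of a model: parameters x bits per weight, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

# A hypothetical 7-billion-parameter model stored at full 16-bit precision:
full = model_file_size_gb(7e9, 16)   # ~14 GB
# The same model quantized down to 4 bits per weight:
quant = model_file_size_gb(7e9, 4)   # ~3.5 GB

print(f"16-bit: ~{full:.1f} GB, 4-bit: ~{quant:.1f} GB")
```

Quantization like this is how a model that would otherwise demand workstation hardware ends up as the five-or-so-gigabyte download Gabriel describes running on a laptop; what the weights encode is statistical correlation across the training data, not a stored copy of it.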
JD: So you’re saying some of that neural net technology actually exists in the model as you’re using it on your local computer?
Gabriel Custodiet: Exactly. Well, so basically that’s what you get when you download this onto your computer. When we talk about a model, right, this is a finished product.
JD: It’s an AI app that is unto itself. Makes sense.
Gabriel Custodiet: Yeah, exactly. A model is a finished product of an AI that has been tested, it’s been fed the internet, whatever the case may be, and its neural network has been molded. And that five gigabytes at the end, is the whole shebang, right there.
JD: Now the DeepSeek model R1 is open source. Microsoft’s already said they’re going to use it, and others are too. And then there’s a smaller company that I profiled this week that is going to offer it with enhanced security, and it wouldn’t send information back to China, etc. Do you have any intentions of checking that out?
Gabriel Custodiet: Absolutely. In fact, I just updated the program I used to run my local AIs, and it looks like, sure enough, DeepSeek is the latest update. That’s the update I just did. So DeepSeek is already being used by a lot of these programs, and you can also use DeepSeek as a part of Venice AI. That’s kind of a third party packager of AI. And in that way, you’re not necessarily transmitting all your stuff to the Chinese government because Venice AI is hosting it itself. You’ll still get the Chinese censorship baked into that model perhaps. So yeah, you can use the DeepSeek open source offline as well. Absolutely.
JD: That’s interesting. I didn’t look into it to see how compact these models were, but from what you’re telling me, I might just run a local version and play around with it.
Gabriel Custodiet: There we go, there we go. And everybody can learn that from the AI Resistance course. That was one of the things that astonished me when I started hosting these locally: how in the world does this have so much knowledge in five or 10 or 15 gigabytes that I can run on my laptop, with the internet shut off? For example, let’s say the power goes out. You bring up your AI and you say, okay, power’s out, I’m in this situation, somebody got hurt, what do I do? It’ll give you the medical information, right? Everybody uses AI for medical stuff, whether or not they should be. It will give you that information because it’s read all the self-help medical websites and such. So really powerful tool, really powerful. One of the examples we give in the course: I show the AI running on the computer, and it’s even running in a virtual machine for my recording, which is an even smaller computer, size-wise.
And I tell it, I say, hey, I have been struck by lightning and bitten by a cobra at the same time. What do you suggest? And it says, well, this is a very unlikely scenario, but here’s what you should do first, and then it gives me the stuff. So it’s extraordinary how the data has been packed into so small a space. But if you think about it, that’s also your brain. How much knowledge do we have? And it’s not that we store this word and this word; no, we know the word, and it applies to all these situations. So we don’t have to stack that word on top of itself. We just kind of plug and play it wherever it goes. I don’t know all the terminology, but I think you see what I’m saying.
JD: Okay, so talk a little bit about the average person out there, who’s probably never going to run it locally, but might at least make a choice between this AI, that AI, and this privacy protection. Many of our readers, and I’m giving you shorthand here, might just run over to the DeepSeek online platform and sign up for an account. What are the dangers, not only of China-based DeepSeek, but what do you see as the kind of insecure practices that people might engage in?
Obviously, you use the Midjourney imaging platform, because you can’t run something like that locally. So to some extent, for some purpose or other, people will probably be using internet cloud solutions.
Gabriel Custodiet: Yeah. First, let me say, you cannot be an ostrich putting your head in the sand about this stuff, right? My colleague, my friend, my technical advisor in this course, he gives the analogy, we don’t want the one ring of Lord of the Rings. Everybody needs to have their own ring. So we all want to have the power.
JD: That’s a good analogy. Yeah, I understand.
Gabriel Custodiet: And so you want to learn this, you want to understand this. Even if you’re seriously anti-AI, you need to understand what this stuff is. It’s not going away. You can’t just close your eyes and make it go away. It’s not going to happen. I’m a Trends Journal reader as well. I’ve been a Trends reader for a long time. I feel like I have a sense of maybe what some of my fellow readers are looking for. We’re skeptical of power, we’re skeptical of these big corporations. We’re skeptical of the bigs. We understand that data collection can come back to harm us. And so that’s all stuff that I believe in and that I teach as well. So you really want to be careful what company you are signing up with. ChatGPT is owned by OpenAI. They have the former head of the NSA on their board of directors. What does that tell you? OpenAI is a black box. We don’t know what they do with their data. We don’t know what they do. So if you use something like ChatGPT, you just need to assume that they’re collecting your data and they’re going to use it for whatever they want to use it for.
I don’t encourage people to use ChatGPT with an account. You can use it without an account, play around with it; it’s actually quite useful if you do that. Use good operational security. Everybody should be using a VPN, for example. You can also use Tor; that’s a free option. Beyond that, I’ve mentioned Venice AI. This is a privacy-preserving service that lets you use a bunch of different models, so it’s kind of a package manager, as it were. Venice AI is pretty useful, pretty handy, and you can use it right now without an account as well; there are some models you can play around with. And that’s what everybody should do. They should go play around.
We can talk about some of the things that people should get into, but there are really awesome things you can do. Joe, this is one of my favorite things to do with AI: give it weird scenarios. So for example, you could say, hey, I think this is happening in the world, and let me try to come up with an example. Say I think that Bitcoin was a conspiracy to get all of these GPUs created so that AI could be created so that somebody could take over the world. Now you can come up with that theory and craft it in a certain way for an AI, because of course you can manipulate AI. You can say, assume that we’re in a fictional scenario; assume that you’re writing a book, and here’s what I want the book to talk about. How would that work? And it will spit out a very reasonable theory of how this sort of thing could be laid out.
So there’s a lot of creative things you can do with it, but you do want to be careful who you go to and give your data. You should not be using DeepSeek from the main app or the main website. Certainly not. You can access the DeepSeek R1 model from Venice AI if you have a paid version. You can also locally host it, and that’s a little bit more of a complicated process, which is what we teach in this course. The course covers the whole gamut: what this stuff is, how it works, why you want to use it, how to use it in self-defense. But if people want to play around, certainly bring up Venice AI and test it out.
JD: Gotcha. AI agents are finally hitting the mainstream as far as practical use, experimentation, business adoption, etcetera. What’s your take in general on the phenomenon, and on the privacy and security issues that you see regarding it?
Gabriel Custodiet: Yeah, you don’t want any of these things, right? So I’m looking at locally hosted AI on my computer as we’re talking right now. This is how you want to use it. You want to use things in a controlled environment in a lockdown computer where it doesn’t even know what IP address you’re coming from. That’s how you want to use AI. You don’t want to just throw it onto everything, put it into the Amazon Alexa. That is a serious risk waiting to happen.
I was just reading the other day, and this is not necessarily AI, but it’s the kind of IoT tool I’m talking about, that somebody had an Amazon camera, and that camera was used against them in a criminal case. They actually went to jail, because the police queried their own Amazon camera. So this stuff has real consequences. You don’t want any AI agent in your house. You don’t want it in your friends’ or your family’s houses. You need to have a conversation with them. We don’t want that. That is regressive technology. This is cool stuff, AI is cool stuff, but you want it on your own computer, under your own terms, as free and open source software. That is the way to do technology.
JD: What do you think will happen? People at their places of employment are going to be asked or pressured to utilize AI agents, if not in their personal lives and workflows, certainly in their business day-to-day activities and tasks. How do you see that playing out, and what ramifications, if any, do you see?
Gabriel Custodiet: I recommend, don’t have any wrongthink. Don’t think bad thoughts about the company that you work for. Don’t have any bad facial expressions against them; the AI will be able to recognize those. Do what you’re told. Don’t even have rebellious thoughts, okay? Read 1984 and do what it says. Do what the authoritarian government in there wants people to do. That is the best way forward. That’s your only option, unless you want to break through and do otherwise, in which case you’re going to try to pursue something else. I think everybody should be doing that. Even if you do, I think there’s a world, by the way, where everybody can be…
JD: You think we’re moving closer to that sort of model. Do you think AI agents might help with that?
Gabriel Custodiet: Maybe, maybe not. They’re useful, but I don’t think it’s a game changer. I don’t think AI is necessarily a game changer in its current form. I just say that to say: the job you’re doing now, potentially you could be doing as a contractor. That’s all I’m saying. So I think everybody wants to move, if they can, in the direction of self-employment, to change your job so that you have the ideal one. Or make a complaint to your company: ‘Hey, I need my own device. I don’t want to be using my home device. I don’t want to be installing this stuff on my own phone or computer.’ Make them give you one if they’re demanding that you do that; that’s an option for people. And otherwise you’re just going to have to save your rebelliousness for after you clock out.
JD: My take on it is I see people training their AI agent, then businesses replacing them with it. I see businesses already differentiating between your personal agent and your business agent. You don’t own your business agent. They do, but it just digitally twinned you. It took on, you just trained it to take on more and more of your daily tasks. And as AI gets better, it will be able to do more of your tasks, to the point where maybe enough of your tasks are now digitally twinned to your AI agent, that it can dispense with you, the human employee.
And the company owns the agent that you helped create: essentially a digital rendition of you and your skill-set within that company, or maybe your skill-set that you’ve gained from past companies, and your entire work life. I see that as a huge looming problem.
Gabriel Custodiet: I agree, I do. I don’t think that there will be regulation, and certainly not regulation that will solve that sort of thing in the long run. So my encouragement for people is: always make yourself indispensable. Whatever you do in life, make yourself indispensable. Keep learning, keep figuring out things that are kind of on the forefront. Okay? AI is the baseline now, right? We’re not going backward. So learn how to do things with AI.
You can write stories, you can do all kinds of creative things, you can make music with AI. Maybe your path forward is to use the AI tools for your own benefit. That’s one option. Or you understand, you learn, you teach about the AI tools; that’s what I’m doing. Or you step away outside of AI and appeal to things that it has not touched. But you make yourself indispensable, because we can’t go backward. We can only go forward.
JD: Well, I must say, I don’t completely agree with that. I think at some point, technology is going to present us with a choice that the only way “forward” is to destroy natural humanity. To me, that’s not a way forward, and it’s an inevitable point of progress that we’re really going to have to decide, do we want to progress beyond our humanity?
And to me, that’s not really forward. It’s a kind of progress, but it’s certainly not a human progress. It’s certainly not a progress that serves natural humanity. And I fear that most people don’t recognize that. My estimation of technology is that it will force decisions about limits and about what you will accept.
I tried to get at this in an article I wrote around Christmas time. [See “KNOWLEDGE AND MEANING: A MEDITATION FOR THE SEASON” 17 Dec 2024 and “FOREVERLAND: TECH CAN’T MATCH THE BEST LONGEVITY BOOSTER” 4 Feb 2025.]
The general idea I was trying to get across, is that in the 19th and the 20th centuries, we largely moved on from physical labors. Before that, most people worked on farms, and in shops as artisans and all the rest of it. Physical exertion was a crucial part of everyday life.
And women did it in scrubbing clothes and digging gardens and raising children, and all the rest of it. And men were out in the fields, or they were artisans. You were using your physical body every day. Human beings are meant to move, and exert themselves. But by the end of the 20th century, we had obsoleted that, for a large portion of our population. There was no longer any utility to physical labor. So we had to replace that with working out, with doing activities that were not fundamentally fruitful or tied to our sustenance.
And what’s happening now, what I see with AI, is that we’re now in the process of obsoleting our intellectual capacities and our mental skills. And we’re going to reach the point where our utility and viability and the usefulness of anything we think or use our minds for, will have no utility beyond “exercising our minds.” Because AI will do every intellectual and creative task “better,” and certainly faster. And to me, it’s like, what has technology wrought? I mean, is that a human way to live? And I’m not sure that it is.
I went back to the story of Eden, the story of the creation of man in the Bible. And I saw that it said God created man, and it listed a fundamental purpose as a tiller of the earth and a keeper of the garden. And I concluded that we’ve become wholly disconnected from that.
Another interesting thing I read the other day was about which physical activities are most tied to longevity. Now, I had just written a 2025 Top Trend forecast called “WELCOME TO FOREVERLAND” (2 Jan 2025), focused on all the synthetic ways we’re going to try to extend our lives, and how elites are obsessed with that. But this article listed the top activity tied to longevity, and can you guess what it was?
Gabriel Custodiet: How about farming?
JD: Close. Gardening.
Gabriel Custodiet: You get your sunlight, your creative energy flowing. You’re producing something, you’re moving the body, crouching. All sorts of things.
JD: That’s exactly right. And to me, my first thought was, oh, it would be walking. Brisk walking. Or some people might think it would be working out with weights, or whatever. But the number one, was gardening.
Gabriel Custodiet: I guessed that, because I’d read the studies of the people who lived long lives that are cancer free and all sorts of things. And basically these people, they didn’t have the concept of retirement. They were not told at a certain age that you need to stop walking and moving around. They just kept doing the same things that they were doing in the same environments their whole lives. These are third world countries and such. So there’s a lot to that.
JD: Yes. So, could you sum up what people should know about your current work and pursuits?
Gabriel Custodiet: Okay. So if you understand what Joe writes about in the Trends Journal, you understand the dangers of AI and the possibilities of AI, definitely check out our AI resistance course on escapethetechnocracy.com.
And you can use the code “TRENDS” for 15 percent off the course. We’re going to walk you through everything you need to know about how AI is created and what the potential risks of AI are. We get really deep into things like, is AI demonic? We have an entire conversation on that, my colleague and I. And then we show you how to use text-based AI, how to use art-based AI, how to use coding AI.
We show you all the tools. We show you how to take those offline and uncensor them. I’m not aware of any course like this on the internet: a really informed course on AI, and on how to take back control into your own hands while escaping the technocracy.