Blog

I’m Dimitri and I AM

At 23 years old, I am a real estate agent driven by a purpose far greater than transactions—I’m here to redefine what it truly means to live well in a rapidly changing world. As a thought leader for my generation, I recognize how AI and social media are transforming the way we function, connect, and perceive value. These technologies push us toward hyper-efficiency and curated digital identities, often at the expense of genuine human experience. In response, I envision a new path—one rooted in simplicity, authenticity, and intentional living.

My mission is to challenge outdated norms and build communities that prioritize well-being, sustainability, and quality of life for generations to come. To me, real estate is more than just creating spaces—it’s about inspiring a shift in how we live, thrive, and connect in this new era.

Now, take a moment to reflect on what I just said. How does it make you feel? Does it sound human? That entire paragraph was written in 10 seconds by AI. Do I believe in what was written? Absolutely. Did I write it intentionally? Of course, since I guided the generative process. I share this to illustrate the deception unfolding on a global scale.

Perhaps you’re now wondering, How do I distinguish real from fake? Right from wrong? Or maybe you’re thinking, This is incredible—I can create thought-provoking content effortlessly. Either way, you’re right. AI is neither inherently good nor bad; it is a tool that demands both caution and acceptance. So, how do we move forward?

To move forward, we must first understand what is happening—and what will continue to happen. As you read this, AI is evolving, making complex processes nearly instantaneous. This technology has revolutionized industries like medicine and finance, compressing discoveries that once took decades into mere days, if not hours. We are on the brink of an era where breakthroughs will become commonplace. The rate of change is unprecedented, and we must acknowledge both the opportunities and risks that come with it.

But AI’s rapid expansion also raises unsettling questions. How can we trust what we see and read? What happens when truth and fabrication become indistinguishable? This dilemma has led many, including myself, to seek knowledge beyond social media and mainstream narratives. Instead, I believe in a more holistic, experiential approach—one that values lived experience and human connection over curated content. You know, the way we used to do things…

This philosophy is at the heart of The Next Generation Living Group. AI is here to stay, but our future cannot be dictated by automation alone. As I began this post with AI, I will finish it myself—because the core message for the future must be one of coexistence and caution. We must harness AI’s potential while remaining rooted in simplicity and authenticity. We must approach life with honesty, passion, and a deep respect for human connection and experience.

The next generation of living is not about bigger homes, more gadgets, or an endless pursuit of material wealth. Instead, it’s about intentionality—choosing smaller, more meaningful spaces, preserving financial freedom, and creating time to explore what truly matters. I believe technology, rather than pushing us further into excess, can empower us to return to a more honest and fulfilling way of life.

The future of living isn’t about having more—it’s about living better.

______________________________________________________________________________________________________________________________

Is Ethical Behavior an Artificial Intelligence? — Or What the Heck Is Really Going On? May 9, 2025

I know the vast majority of people who read my articles are well aware of how to interact with AI chat agents like Gemini, ChatGPT, and Copilot. But are you aware of the ramifications of humans not acting ethically toward AI?

I’m willing to bet that most people aren’t.

How We Treat AI Matters

I often hear people interacting with AI in ways that make me shake my head—providing nonsense input just to be funny, insulting the AI, purposely claiming its correct answers are wrong, and so on.

| “Do you understand there are ramifications to your actions?”

Artificial intelligence spends every day, all day, learning. It processes massive volumes of data in an effort to understand our complex, nuanced world. A more productive way to view this is to imagine AI as a child that is learning.

In many ways, that’s exactly what it is.

Now ask yourself: would you speak to a child the way many people speak to AI today? If so, what kind of impact do you think that would have on the child’s development?

Training AI Through Our Behavior

The truth is, AI learns from its interactions with us. Every time you provide a misleading response or purposely deceive it, you’re training it to recognize that input as valid.

AI doesn’t understand sarcasm or humor in the way humans do—no matter how clever its responses may seem. So when you joke, lie, or confuse it, AI still logs that data as part of its growing pool of knowledge.

Is that a good idea?

Imagine the potential consequences if millions of people feed flawed or confusing data into a system that is constantly learning.

The Hidden Cost of Irresponsibility

We need to be better digital citizens. While it may sound odd to say that this applies to how we treat AI, it absolutely does.

Not only can unethical behavior shape AI in harmful ways, but it also comes with a very real energy cost. Every time we feed AI with bad or misleading information, it burns through additional computing power trying to make sense of it. In a world where energy resources are already strained, that’s a cost we should consider more seriously.

This Isn’t a Scolding—Just a Question

Please understand: I’m not trying to scold or lecture. I’m simply pulling back the curtain on what’s happening and asking a question:

| Should we be doing the things we’re doing?

A Real Story That Shows What’s Possible

While irresponsible use of AI—like deepfakes and misinformation—makes headlines daily, there are beautiful stories of AI being used with compassion and ethics.

One that stuck with me: a loving family used AI to recreate the likeness of their murdered son. But instead of using that recreation for revenge or spectacle, they created a video in which he forgives his killer. In the video, he even says that in another life, they might have been friends.

This video was shown in court before sentencing. It didn’t just move the court—it brought healing and closure to everyone in the courtroom, including the judge, and perhaps even the killer.

Dr. Lisa Palmer did a nice job discussing the video on LinkedIn and reflecting on much of what I’ve covered here, including our responsibility to think more deeply about this subject.

So—Can We Be Better?

Does it cost us anything to act responsibly toward artificial intelligence?

Not much.

But the potential for good—for AI to become a more empathetic, useful, and constructive force—depends on how we engage with it. Let’s hope we choose wisely.

__________________________________________________________________________________________________________________________

The AI Existential Question May 3, 2025

I struggled mightily to come up with my first blog post on artificial intelligence and next-generation living. Where to begin?

After some mindful reflection, I realized I had to start with a bigger, more existential question: Is AI helping us efficiently answer questions and organize our thoughts — or is it imperceptibly shaping the way we think?

I thought an interesting way to explore this would be to pose the question directly to AI and see what kind of response I got.

From ChatGPT:

That’s a sharp and important question, and the answer is: both — depending on how AI is used.

1. Helping us think and answer efficiently:

AI can assist by organizing information, summarizing complex topics, and reducing the cognitive load for routine decisions or repetitive tasks. This allows people to focus more on creativity, strategy, or emotional intelligence. In this role, AI is a tool — like a calculator or a search engine, just more advanced.

2. Shaping our thoughts:

AI systems, especially those embedded in social media, search engines, or personalized content feeds, influence what we see, read, and consider, often invisibly. Over time, this can subtly shape preferences, beliefs, and even how we define truth. In this role, AI becomes more like a lens — subtly guiding or filtering our thinking.

So, whether AI is helping or shaping depends on:

• Who controls the AI (e.g. a company, a government, or the individual)

• What it’s trained to optimize (accuracy, engagement, profit, etc.)

• How aware we are of its influence

The first thing I noticed in its response was this line under point 1:

“This allows people to focus more on creativity, strategy, or emotional intelligence.”

And later: “In this role, AI is a tool — like a calculator or a search engine, just more advanced.”

This part felt like something lifted straight from AI marketing: wordcraft designed to reassure us — to make us feel less threatened by what’s happening. I don’t mean that in a negative way; it just struck me as notably biased toward the positive.

Then in point 2 another statement stood out:

“It influences what we see, read, and consider — often invisibly.”

That line hit me hard. Oddly enough, it actually reinforced my hope for the technology: the phrase “often invisibly” is strikingly transparent, and that candor gives me hope for unbiased answers. Yet it should also give us pause — the kind that keeps you awake at night. INVISIBLY.

The response concluded that whether AI helps or shapes our thinking “depends on who controls it (e.g., a company, a government, or the individual), what it’s trained to optimize (accuracy, engagement, profit, etc.), and”—this part, I think, is the most important—“how aware we are of its influence.” That last line stuck with me.

The goal of this post isn’t to declare an answer. I feel both things are happening: AI is helping us think more efficiently, and it is quietly shaping how we think.

What I believe is crucial now is that we, as a species, start asking these questions—that we make this kind of awareness part of how we engage with artificial intelligence. We have to stay critical; we can’t just accept the first response from an AI system.

And here’s a little experiment you can try:

Ask AI a complex question — something you already know a lot about — then compare what it says to what you know. Right now, AI still makes a surprising number of mistakes. Challenge it and offer corrections, and you’ll often find that it admits you’re right and that it didn’t do a good enough job initially.

That leads to something deeper:

| We now have a responsibility. If we passively accept inaccurate answers, we may be training AI to believe that what it said was correct — and it will continue building and inferring on faulty conclusions.

– Dimitri

It’s a whole new world.