The Future(s) of AI: Is God in AI?

Embracing Positivity

AI does not have to play the villain in our collective narratives about the future. Despite the negative bias perpetuated by popular media, I am determined to highlight AI's positive potential in this discussion. It is true that a movie like "Terminator" captures more attention than an "Engageinator" ever would, but we must challenge that narrative.


Fortunately, a countermovement has been growing in the media for more than 50 years, showcasing positive partnerships between people and intelligent technology. Take, for instance, the iconic series "Star Trek." In one episode of "The Next Generation," puzzling events both on and off the holodeck lead the Enterprise crew to a surprising realization: the ship is giving birth to its own offspring, and the crew must help ensure its survival.

In this context, Data, an AI crew member, points out the substantial risk involved in allowing the new intelligence to complete its task. However, Captain Picard argues that the sum of their honorable experiences with the Enterprise should inspire trust in the outcome. The ship's systems, combined with the crew's personal records, mission logs, and fantasies, contribute to this emerging intelligence.

Similarly, when Rabbi Marx was asked whether God is present in the Internet, given its potential for wickedness, he responded with a profound insight. According to him, God remains hidden in our lives, and it is through our own choices and decisions that we make God's presence manifest. The same applies to Generative AI (GAI). It is not inherently imbued with goodness unless we actively infuse it. Whether you work in cyberspace or in whatever you call your office these days, we make God present by our own choices and decisions; we are responsible for manifesting God's presence through what we do.


So is God in GAI?

No, unless we bring Him there.

If the term "God" gives you pause, simply replace it with whatever you define as Goodness.

Axios interview: it's worth watching.

Moreover, numerous AI professionals across diverse sectors are calling for self-monitoring practices, acknowledging that we, as individuals, share a responsibility to report harmful outputs. This echoes the familiar refrain: "if you see something, say something."

In conclusion, positive engagement with AI safety is crucial for companies and individuals striving to become future-ready organizations. Being future-ready means having the capacity to detect, respond to, and evolve with challenges across different time horizons and scales. If your organization is not yet prepared, consider hiring me, Richard Bukowski, to facilitate the process.

Together, we can navigate the uncertainties of the future and position your company for success.

Stay tuned for more insightful discussions, and remember, the future of AI is in our hands.