Even though its tipping point came in late 2022, AI has been a huge topic this year. Advances in the technologies behind “artificial intelligence” are one reason it’s on the tip of everyone’s tongue, but another important reason is the rest of the tech industry’s embrace of the new technology. It still has some obvious flaws, but with this much hype and attention, it’s being integrated into more and more software and will likely be a part of every platform going forward. Today, we’ll get you ready for another year of advancement with a discussion of three concepts to keep an eye on in the AI space next year.
1) More Search Integration
Of all the things that use AI, the most conspicuous is search. If you remember the internet of the 1990s, most searching was actually closer to browsing a directory. Google’s rise in the early 2000s was a huge change in how searching worked behind the scenes: instead of maintaining a big directory like the yellow pages, Google crawled the “entire internet” to see what was on it, then ranked every page by how relevant it would be to a given search.
AI search does something similar but can combine text and content from multiple sources to deliver answers that are more than what any one source says. A lot of attention goes to how it answers questions rather than how it understands them, which points at something interesting: we now simply expect AI to handle a wide range of language tasks without question.
When you ask the right questions, Bing Chat or Google’s AI-driven search results let you see pertinent information faster than ever before (even if they do play a little fast and loose with the truth sometimes). While this is a huge step forward in search technology, asking the right questions is (still) the key to unlocking any kind of new knowledge, be it practical, theoretical, or speculative.
2) Better Visibility of Prompt Engineering
Speaking of asking the right questions… a new term that has emerged alongside ChatGPT is prompt engineering. This very-official-sounding term was coined to legitimize the art of asking AI the right questions. One of the biggest tricks for getting valuable information out of an AI is to ask it how to ask for what you want. Since there are so many articles out there explaining what prompt engineering is, AI can now find answers to questions about itself. Try “what questions should I ask you to find out more information about prompt engineering?” for instance, and you’ll get answers (based on content from across the web) about how to most effectively extract information from AI systems.
The dark side of this practice is what’s called jailbreaking, where you trick an AI model into breaking its own rules. I won’t go into too much detail here, since the practice is somewhat unethical by design unless done for testing purposes. Any limits an AI system has been coded with can potentially be broken simply by phrasing a question as if it were hypothetical, turning the answer into commentary on the system’s own limits. While some of these limits probably shouldn’t be crossed, asking metaquestions about how best to use a system is a great way to discover its true capabilities.
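If you’d rather experiment with metaquestions programmatically than in a chat window, the trick can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular product: the helper function is hypothetical, the model name is an assumption, and the actual API call is left commented out since it requires an OpenAI account and key.

```python
# Sketch: composing a "metaquestion" that asks an AI model how to ask it
# about a topic. The function name and model below are illustrative only.

def build_metaquestion(topic: str) -> list[dict]:
    """Compose a chat-style message list asking the model how best to ask about `topic`."""
    return [
        {
            "role": "user",
            "content": f"What questions should I ask you to find out more information about {topic}?",
        }
    ]

messages = build_metaquestion("prompt engineering")
print(messages[0]["content"])

# To actually send it (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(reply.choices[0].message.content)
```

The same pattern works for any topic: swap in “jailbreaking safeguards” or “summarizing legal documents” and the model will describe how to question it effectively on that subject.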
3) More OpenAI (ChatGPT) and Microsoft (Copilot) Collaboration
Once you get into the habit of asking metaquestions, you’ll be better prepared to use Microsoft’s Copilot as it appears across their entire product line. You may have already noticed the Copilot preview on your desktop, but it’s also becoming more integrated into other products like GitHub and Microsoft 365. To use Copilot with 365 and your Office apps, though, you’ll need an E3 or E5 license and a Copilot subscription on top of it. This is a big trend to watch over the next year: while AI has been everywhere for the past year, it has also been subsidized fairly heavily by tech companies. Going forward, we’re likely to see fewer apps giving away their AI functionality.
This also makes the relationship between OpenAI and Microsoft interesting going forward: since Microsoft already seems to have a plan for making AI profitable, OpenAI, the organization that put AI on the map with ChatGPT, probably doesn’t face the same pressure (since it’s largely funded by Microsoft). If you’ve heard about the recent drama between the two companies, it may not have been clear that they are partners in developing much of the most widely used AI software, namely ChatGPT and DALL-E.
I won’t speculate too much on where all of this is heading, but the more streamlined software gets, the less expert knowledge it takes to use. These changes can be a good thing, since some of the rough edges of computing get a little smoother, but it’s worth noting that graphic design and user experience went through something similar in the 2000s, when mobile operating systems changed how people thought about computing tasks. By hiding the file system and emphasizing interface simplicity, they gave most users a more straightforward experience. On the other hand, this makes it more opaque to users what’s really going on in their devices and ultimately makes things harder to fix when they go wrong.
-Written by Derek Jeppsen on Behalf of Sean Goss and Crown Computers Team