The concept of creating machines that think and act like humans has been around since ancient times—like the story of Talos, a giant bronze robot from Greek mythology. In the story, the god Hephaestus created Talos to protect the island of Crete. This mechanical figure could patrol the island’s shores and throw rocks at any ships trying to invade. Talos is one of the earliest examples of a being designed to do a specific job—just like modern AI systems which are built to help us with different tasks.

After AI as we know it began to take shape in the mid-20th century, it became an integral part of our daily lives, often working behind the scenes in ways we may not always notice. Virtual assistants like Siri and Alexa help us manage tasks, set reminders, and answer questions using AI’s ability to understand and process natural language. Media platforms like Netflix and Spotify use AI to analyze our preferences and tailor content to our tastes. Facial recognition technology uses AI to let us unlock our smartphones or tag friends in photos. AI’s ability to understand and generate human language powers chatbots that handle customer service inquiries or provide support on websites, making interactions faster and more convenient. It is already everywhere.

Ideas within ideas

You may hear a few seemingly interchangeable terms thrown around: artificial intelligence, machine learning, deep learning, neural networks. You can think of these terms almost as nesting dolls, with each subsequent term contained within the previous one.

Artificial intelligence is an umbrella term for machines that problem-solve and communicate like humans. Machine learning is a subset of AI where machines ‘learn’ from data to make decisions or predictions without being explicitly programmed. Classic machine learning often requires some human intervention to sort the data before the machines can effectively learn from it.


Deep learning is a specialized subset of machine learning. It uses a many-layered mathematical technique called a neural network to simulate the processing power of a human brain and is particularly good at recognizing patterns in large amounts of data, like images or speech.

While classic machine learning also uses neural networks, they tend to be quite shallow, using only one or two computational layers. Deep learning, as the name suggests, uses neural networks with hundreds or thousands of layers, enabling it to engage in complex problem-solving, like automating tasks without human involvement. Deep learning is more autonomous and can handle large amounts of raw data but also requires far more data points and computational power than classic machine learning.
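The “layers” idea above can be sketched in a few lines of Python. This is a toy illustration, not a real model: the weights and sizes are invented, and a “deep” network here is simply the same fully connected building block stacked several times.

```python
import math

def dense_layer(inputs, weights):
    """One fully connected layer: weighted sums passed through tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(inputs, layers):
    """Feed the input through each layer in turn."""
    for weights in layers:
        inputs = dense_layer(inputs, weights)
    return inputs

# A "shallow" network: a single layer of two neurons (made-up weights).
shallow = [[[0.5, -0.2], [0.1, 0.9]]]

# A "deep" network: the same block stacked five times.
deep = shallow * 5

x = [1.0, 0.5]
print(forward(x, shallow))
print(forward(x, deep))
```

Real deep learning frameworks learn those weights from data; the stacking itself, though, is exactly this repeated composition of simple layers.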

When computers talk back


Some of the most common public interactions with AI include language or images, like a question-answering chatbot or an image generator. These functions utilize subfields of machine learning called Natural Language Processing (NLP) and Computer Vision to process data from language and images.


As the name suggests, Natural Language Processing focuses on the interaction between computers and humans through language. Initially created in the 1950s to assist with translation, NLP enables machines to understand, interpret, and respond to human language. Large language models (LLMs) like ChatGPT depend on NLP to both analyze language input and create legible output. Computer Vision focuses on the interpretation of, and decision-making based on, visual information. It’s through computer vision that AI can recognize objects, faces, or even diagnose medical conditions from images. But NLP and Computer Vision alone can’t create a chatbot or a brand-new image from a prompt.


Originally, language models relied heavily on Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks. These models were designed to process text sequentially, meaning they read and generated words one at a time while maintaining a memory of previous words. While this approach worked reasonably well, it had significant limitations:

  • The models often “forgot” important context from earlier parts of a sentence or paragraph.
  • They were slow and computationally expensive because they processed text word by word, making it difficult to scale to larger datasets.
  • Their sequential nature made it difficult to parallelize computations by breaking each task into subtasks that could be executed simultaneously, further limiting their performance on large-scale tasks.

The introduction of the transformer model in 2017, through the paper “Attention Is All You Need” by Vaswani et al., marked a turning point in the development of LLMs. The transformer model addressed many of the shortcomings of the existing models by using an attention mechanism that allowed it to process entire sequences of text at once, rather than word by word. A year later, OpenAI began introducing a series of Generative Pretrained Transformers (GPT) capable of generating text. These transformers were pre-trained on a massive dataset containing a wide variety of texts from the internet, including books, articles, and websites.
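The core of that attention mechanism can be sketched without any machine learning library. In this toy example (the embeddings are made-up two-dimensional vectors, and real transformers add much more machinery), every position is scored against the query at once, which is what lets a transformer look at a whole sequence in parallel instead of word by word:

```python
import math

def softmax(scores):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Score every key against the query at once, then blend the
    values by those scores. This is the parallel-friendly core idea."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy 2-dimensional embeddings for a 3-word sequence (invented numbers).
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]

print(attention([1.0, 0.0], keys, values))
```

Because the scoring is a batch of independent dot products rather than a step-by-step loop over the sequence, it maps naturally onto parallel hardware, which is exactly the limitation of RNNs that transformers removed.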

While transformers and LLMs have made significant strides, challenges remain in ensuring that the models are accurate, up-to-date, and capable of handling more complex tasks. This is where innovations like Retrieval-Augmented Generation (RAG) and Multi-Agent Systems come into play. One limitation of LLMs is that, once trained, they rely solely on the data they were fed during training. This means they can sometimes provide outdated or incorrect information. RAG addresses this by combining the generative capabilities of LLMs with the ability to retrieve relevant information from external sources like databases or the web in real time. LLMs are also limited when asked to perform tasks that require attention to multiple sources: they can start to “hallucinate,” giving made-up or incorrect answers.
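The RAG idea of “retrieve first, then generate” can be sketched in miniature. Everything here is invented for illustration: the knowledge base is three hard-coded sentences, the retrieval is a crude word-overlap score, and a real system would pass the resulting prompt to an actual language model.

```python
# Tiny stand-in for an external knowledge base (invented content).
documents = [
    "The office closes at 6 pm on Fridays.",
    "Concrete should cure for at least 28 days.",
    "The 2017 transformer paper introduced the attention mechanism.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question.
    Real RAG systems use vector similarity search instead."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, docs):
    """Augment the model's prompt with the retrieved context, so the
    answer can reflect information the model was never trained on."""
    context = retrieve(question, docs)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(build_prompt("How long should concrete cure?", documents))
```

The key design point is that the knowledge lives outside the model: updating the document store immediately changes what the model can answer, with no retraining.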


This is where the multi-agent system approach comes in. The idea is for multiple AI agents to work together to achieve a goal. Each agent in a multi-agent system can specialize in different tasks, such as decision-making, problem-solving, or perception. By collaborating, these agents can tackle more complex tasks than any single agent could handle alone.
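A minimal sketch of that division of labor: here each “agent” is just a plain function with one specialty, and a coordinator chains them together. In a real multi-agent system each stand-in below would wrap an LLM call or a specialized model; the agent names and tasks are invented for illustration.

```python
def research_agent(task):
    """Specialist 1: gather raw material for the task."""
    return f"draft notes on {task}"

def writing_agent(notes):
    """Specialist 2: turn the notes into a deliverable."""
    return f"report based on {notes}"

def review_agent(report):
    """Specialist 3: check the deliverable before it goes out."""
    return f"approved: {report}"

def coordinator(task):
    """Route the task through each specialist in turn."""
    result = task
    for agent in (research_agent, writing_agent, review_agent):
        result = agent(result)
    return result

print(coordinator("crane placement"))
```

Because each agent has a narrow job, it can be prompted, tested, and swapped out independently, which is much of the appeal over one monolithic model trying to do everything.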


AI and automation in the AEC industry


The AEC industry is still a relative newcomer when it comes to AI usage. According to the most recent Industry Digitization Index, published by the McKinsey Global Institute in 2016, the United States construction sector is one of the least digitized industries, ahead of only agriculture and hunting. There are several significant barriers to AI adoption in construction: companies may have an aversion to high-risk, reactive investment approaches, or be wary of a lack of standards, skills gaps, cultural resistance, and the overall complexity of the sector. But when trying to track trends, simply follow the money: between 2020 and 2022, global investments in AEC technologies grew to $50 billion, suggesting an imminent spike in adoption.

Academic studies and articles show usable applications for AI across all phases of the project lifecycle. Marzouk and Abubakr investigated the application of genetic algorithms to identify the most efficient positioning of cranes on a construction site; Edirisinghe used real-time location sensors to study the ergonomics of construction workers, analyzing their behavior during materials handling to identify potential hazards; and Jiang et al. presented a methodology for using AI to detect and classify concrete damage.

Economic and scientific advancements are necessary for a digital transformation to happen, but they aren’t the only bumps in the road. No massive shift can take place without widespread acceptance, and the human reaction to AI has been a rollercoaster ride.

At first, people are delighted by the technological advances and possibilities, but that initial excitement is often smothered by worries of being replaced. People may feel dismayed or desperate until they realize AI tools are not yet mature enough to take the place of a human. Eventually, most can accept and adapt to the new AI reality, realizing that AI is a tool which can make our jobs easier without removing the human element entirely.

Cyber security in an AI era


In today’s digital landscape, cybersecurity is a critical concern, with global cybercrime costs projected to reach $10.5 trillion annually by 2025, according to Cybersecurity Ventures. This staggering figure highlights the increasing frequency and sophistication of cyberattacks. As AI systems become more widespread, they bring specific cybersecurity risks, including data poisoning, malicious models, model evasion, and model theft. Data poisoning occurs when attackers insert harmful data into AI training sets, leading to faulty decisions. Malicious models are intentionally designed to produce harmful or biased results. Model evasion involves manipulating inputs to trick AI into making incorrect decisions, while model theft occurs when an attacker steals or replicates an AI model, risking intellectual property loss and compromised defenses. Understanding and mitigating these risks is essential as AI plays a larger role in critical systems.


Beyond cybersecurity, the use of AI also raises significant legal implications, particularly concerning data privacy. Regulations like GDPR and CCPA enforce strict rules on how personal data is managed. AI models can also unintentionally perpetuate biases, leading to legal challenges. Protecting the intellectual property of AI innovations and ensuring transparency in AI decisions are crucial to maintaining trust and legal compliance. To address these concerns, the AI and Legal teams at DG are actively working on comprehensive AI and Data policies, which will be available soon to guide our practices and ensure regulatory compliance.


Cybersecurity might seem complex, but a few commonsense rules can help keep you and your information safe. Here are some tips from DG IT’s Ryan MacGillivary to help you stay secure:

  • Don’t share personal information! Avoid sharing sensitive details like full name, address, phone number, or financial information. You don’t know how securely the data is stored or who can access it.
  • Beware of fake plug-ins and apps! Fake AI apps can steal sensitive data when installed. A fake ChatGPT plug-in in the Chrome Web Store stole Facebook credentials and was downloaded millions of times before it was removed.
  • Don’t give business information to AI! Don’t upload process flows, network diagrams, or code snippets. The next user may get your information as ChatGPT output. Any confidential data provided to AI is liable to be leaked.
  • Watch out for fakes! Criminals use AI to fake letters, emails, or even phone calls to scam and extort victims. Watch for telltale signs of AI in pictures, like mismatched extremities or shadows and lighting that don’t make sense.
  • Be nice to AI! If you treat the AI respectfully, you might be spared when Skynet takes over.

AI at DG


So, how is DG engaging in the AI revolution? Currently, we are focused on two projects, both of which aim to speed up and improve the quality of our work.

During proposal generation and the preliminary phases of design, we create man-hour and project cost estimations, often a complicated procedure, especially for less experienced employees. To assist with this task, we are working on a project database and a preliminary cost and man-hour estimation tool. The tool will compare the new project with those in the database, considering parameters like size, client, and location, to improve and optimize our estimations. Our second project, “Code Experts,” focuses on developing a chatbot to support research into international codes, accelerating the design process and the confirmation of requirements.
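The estimation idea can be sketched as a simple similarity search over past projects. Everything below is hypothetical: the project records, the parameters, and the weighting are invented for illustration, and the real tool would use our actual project database and a more careful scoring scheme.

```python
# Invented historical project records for illustration only.
past_projects = [
    {"size_m2": 1200, "location": "Oslo",   "man_hours": 3400},
    {"size_m2": 5000, "location": "Bergen", "man_hours": 9100},
    {"size_m2": 1500, "location": "Oslo",   "man_hours": 4000},
]

def similarity(project, candidate):
    """Score a past project: a smaller size gap and a matching
    location both raise the score. Weights are arbitrary here."""
    size_score = 1 / (1 + abs(project["size_m2"] - candidate["size_m2"]))
    location_score = 1 if project["location"] == candidate["location"] else 0
    return size_score + location_score

def estimate_man_hours(new_project, history):
    """Borrow the man-hours of the most similar past project."""
    best = max(history, key=lambda p: similarity(new_project, p))
    return best["man_hours"]

print(estimate_man_hours({"size_m2": 1400, "location": "Oslo"}, past_projects))
```

A production version might average over the top few matches instead of taking a single nearest neighbor, but the principle is the same: new estimates anchored to comparable completed work.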


There are so many more ways we can utilize AI to assist and improve our work, but first we need to overcome one of the biggest challenges of AI implementation: data quality and availability. AI models rely on vast amounts of clean, well-organized data. We need to clean up our current data and utilize a data management system before we’ll be able to power AI tools.

AI has the potential to bring significant benefits to our company. Yes, it can bring speed, quality, and design support, but it can also enhance our internal processes and workflows, and even support decision-making. AI and automation are reshaping our industry, driving efficiency, improving safety, and unlocking new possibilities in design and construction.


As these technologies continue to evolve, it’s crucial for all of us to embrace the changes and see how they can enhance both our individual roles and the company’s overall performance. Innovation often starts with a simple idea, and your perspective could help shape the future of AI at our company. Together, we can harness the power of AI to make a lasting impact on both our company and the industry.

