Published on January 24, 2025
Part 2 - Episode 11.2: AI and Human Collaboration in Cybersecurity
Both Sides of Artificial Intelligence in Cybersecurity - Good vs Evil

In the ever-changing landscape of cybersecurity, continuous improvement through feedback loops has become a crucial strategy for enhancing defenses and maintaining resilience against evolving threats. Feedback loops allow organizations to learn from past incidents, adapt to emerging challenges, and ensure their cybersecurity measures remain effective and relevant.

Chapter 1

Introduction: The Power of Feedback Loops in Cybersecurity

Liz

“Welcome back, everyone, to Episode 11.2 of our podcast. I’m Liz, your host for today’s discussion, and joining me, as always, is our insightful co-host, Bob. Together, we’ll continue exploring one of the most fascinating and impactful dynamics in cybersecurity today—AI and human collaboration. Bob, how are you feeling about today’s topic?”

Bob Collier

“Excited, Liz! It’s always rewarding to dive into the practical ways humans and AI are reshaping cybersecurity. And today’s focus—feedback loops—is a game-changer. This concept connects directly to how we learn, adapt, and improve in the face of ever-evolving cyber threats.”

Liz

“Absolutely, Bob. Before we get into the heart of today’s episode, let’s quickly recap Episode 11.1. We talked about the unique strengths that humans and AI bring to the table, and how combining them creates a synergy that enhances decision-making, speeds up detection, and strengthens responses. If you haven’t caught that episode yet, I highly recommend you go back and listen—it sets the stage perfectly for today’s discussion.”

Bob Collier

“And speaking of today, we’re zooming in on feedback loops—a vital mechanism for building resilient cybersecurity defenses. Whether it’s learning from past incidents, fine-tuning AI models, or adapting employee training programs, feedback loops allow organizations to continuously evolve and stay one step ahead of threats.”

Liz

“Right, Bob. Here’s the key question we’ll be answering throughout this episode: how can organizations use feedback loops to continuously improve their cybersecurity? Understanding this will help organizations not only defend against threats but also create a culture of proactive growth and learning.”

Bob Collier

“We’ll also be looking at some real-world examples, from case studies of phishing incidents to how organizations use AI to adapt in real-time. So, whether you’re an IT professional, a cybersecurity enthusiast, or someone curious about how AI is reshaping industries, this episode has something for you.”

Liz

“Let’s dive in and start by breaking down what feedback loops are and why they’re so important in cybersecurity. Ready, Bob?”

Bob Collier

“Let’s do it, Liz!”

Chapter 2

Segment 1: Understanding Feedback Loops in Cybersecurity

Liz

“Let’s start with the basics—what exactly are feedback loops, and why are they so critical in cybersecurity? Bob, can you break it down for us?”

Bob Collier

“Of course, Liz. At its core, a feedback loop is a system where the outputs of a process are fed back into the system as inputs. In cybersecurity, this concept plays a pivotal role because threats are constantly evolving. Feedback loops allow organizations to adapt, refine, and improve their defenses in real-time. Think of it as a cycle of learning and improvement—detect, analyze, adjust, and repeat.”
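Bob’s cycle of detect, analyze, adjust, and repeat can be sketched in a few lines of Python. This is a toy illustration, not a real security product: the function name, the string-matching “detection,” and the first-token “signatures” are all assumptions made for the sake of the example.

```python
def feedback_cycle(events, rules, analyst_reports):
    """One pass of the loop: detect with the current rules, then fold
    analyst-reported misses back into the rule set (the 'adjust' step)."""
    # Detect: an event is flagged if any known signature appears in it
    detected = {e for e in events if any(sig in e for sig in rules)}
    # Analyze: anything the analysts reported that detection missed
    missed = set(analyst_reports) - detected
    # Adjust: derive a crude signature (first token) from each miss
    new_rules = rules | {m.split()[0] for m in missed}
    return detected, new_rules
```

Running the cycle twice on the same events shows the “repeat” part: an event missed on the first pass is caught on the second, because the analyst’s report became a rule.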

Liz

“That’s a great way to put it, Bob. What’s powerful about feedback loops is that they’re not static—they’re dynamic and iterative, meaning they help systems evolve as they encounter new challenges. Can you share some practical examples to bring this to life?”

Bob Collier

“Sure. One simple example is how organizations handle phishing attacks. When a phishing email bypasses filters and is reported by an employee, the feedback loop begins. The IT team investigates the email, updates the filtering system to recognize similar threats in the future, and uses the incident as a training opportunity for employees. Each step strengthens the organization’s defenses.”

Liz

“That’s a fantastic example, Bob. It highlights how both humans and technology play a role in the feedback loop. But feedback loops also work on a much larger scale, right?”

Bob Collier

“Exactly. Let’s consider real-time monitoring tools, like intrusion detection systems. These tools continuously collect data, flag potential threats, and send alerts to cybersecurity teams. Based on the team’s analysis, the tools can be recalibrated—like adjusting thresholds to reduce false positives or updating rules to identify new attack patterns. The system becomes smarter with every iteration.”
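The recalibration Bob describes—nudging an alert threshold based on analyst verdicts—might look like the minimal sketch below. The function name, the step size, and the false-positive cutoffs are illustrative assumptions, not taken from any particular intrusion detection system.

```python
def recalibrate_threshold(threshold, labeled_alerts, step=0.05):
    """Adjust an anomaly-score threshold from analyst-labeled alerts.

    labeled_alerts: list of (score, was_real_threat) pairs.
    Mostly false positives -> raise the threshold (less noise);
    almost all true positives -> lower it (more sensitivity).
    """
    fired = [(s, real) for s, real in labeled_alerts if s >= threshold]
    if not fired:
        return threshold
    false_pos = sum(1 for _, real in fired if not real)
    fp_rate = false_pos / len(fired)
    if fp_rate > 0.5:                       # mostly noise: tighten
        return min(1.0, round(threshold + step, 4))
    if fp_rate < 0.1:                       # very clean: loosen
        return max(0.0, round(threshold - step, 4))
    return threshold
```

Each call is one iteration of the feedback loop: analysts label alerts, the threshold moves, and the next batch of alerts is judged against the updated value.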

Liz

“Do you have some examples?”

Bob Collier

“Definitely. Here are two examples:

1. Take the case of a company that fell victim to a ransomware attack. Post-incident analysis revealed that the attack exploited an unpatched vulnerability. The organization used this feedback to tighten its patch management process, implement automated updates, and conduct regular vulnerability assessments. Over time, these measures significantly reduced their exposure to similar threats.

2. Another example comes from industries like e-commerce, where real-time monitoring is critical. When a sudden spike in login attempts signals a possible credential-stuffing attack, the system can quickly implement countermeasures—like temporarily locking accounts, increasing CAPTCHA challenges, and notifying users. The feedback from monitoring enables rapid adjustments to thwart the attack as it unfolds.”
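The detection step in the second example—spotting accounts hit by an abnormal number of failed logins—could be approximated as below. This is a toy heuristic sketch; the field names and the per-account limit are assumptions, and a production system would also bucket attempts by time window and source IP.

```python
from collections import Counter

def flag_stuffing_targets(attempts, limit=5):
    """Return accounts whose failed-login count exceeds the limit,
    a rough signal of a credential-stuffing attempt.

    attempts: list of dicts like {"account": str, "success": bool}.
    """
    fails = Counter(a["account"] for a in attempts if not a["success"])
    return sorted(acct for acct, n in fails.items() if n > limit)
```

Accounts returned by this check would then feed the countermeasures mentioned above: temporary locks, extra CAPTCHA challenges, and user notifications.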

Liz

“Both examples show how feedback loops turn challenges into opportunities for growth. They also highlight the balance between technology and human insight. AI systems, for instance, might flag anomalies, but it’s often human expertise that determines the appropriate response. Would you agree, Bob?”

Bob Collier

“Absolutely. The best results come from collaboration. Feedback loops work best when humans and AI work hand-in-hand, with each side amplifying the other’s strengths.”

Liz

“That’s a perfect segue into our next segment, where we’ll explore how feedback loops strengthen specific areas of cybersecurity. But before we move on, here’s a question for our listeners: Can you think of a recent security incident at your organization where feedback led to improvement? Share your thoughts in the comments or discussion forum.”

Bob Collier

“Great call-out, Liz!”

Chapter 3

Segment 2: Strengthening Cybersecurity with Feedback

Liz

“Now that we’ve covered what feedback loops are and how they work, let’s look at how they can be applied to strengthen cybersecurity practices. Feedback isn’t just a nice-to-have; it’s a strategic tool that can transform how organizations respond to threats. Bob, where should we start?”

Bob Collier

“Let’s begin with threat intelligence. Organizations today are bombarded with data about potential threats—everything from malware signatures to ransomware trends. The challenge lies in using that data effectively. Feedback loops help organizations integrate real-time threat intelligence into their defenses, enabling them to act faster and more accurately.”

Liz

“That’s a great point, Bob. Could you walk us through an example of how this works in practice?”

Bob Collier

“Sure. Imagine a retail company facing a surge in ransomware attacks targeting their sector. By analyzing threat intelligence, they identify specific vulnerabilities being exploited. Using this feedback, they implement stronger endpoint protections, update their firewalls, and roll out security patches to critical systems. The next time a similar attack surfaces, their systems are ready to defend against it.”

Liz

“So, it’s about turning raw data into actionable insights and ensuring that the lessons learned are incorporated into future defenses. Threat intelligence isn’t static—it’s dynamic, and feedback loops keep it relevant.”

Bob Collier

“That is exactly right, Liz…”

Liz

“Another critical area where feedback loops shine is post-incident reviews. These are like the debrief sessions after a major event. Bob, why are these reviews so important in cybersecurity?”

Bob Collier

“They’re essential, Liz. Post-incident reviews allow organizations to dissect what happened, why it happened, and how it can be prevented in the future. The insights gained from these reviews feed directly into policies, processes, and technologies. Without this step, organizations risk repeating the same mistakes.”

Liz

“Do you have an example that illustrates the value of post-incident reviews?”

Bob Collier

“Absolutely. Let’s look at a financial institution that experienced a data breach due to an outdated software version. The post-incident review revealed that the organization lacked a robust patch management process. As a result, they established a new policy for weekly updates and automated patch deployment. Over time, this reduced their exposure to vulnerabilities and improved their overall security posture.”

Liz

“That’s a powerful example, Bob. It shows how even a challenging event like a data breach can drive meaningful change when feedback is used effectively.”

Bob Collier

“Thanks, Liz…”

Liz

“Now, let’s talk about the human element—employee training. We all know that employees are often the first line of defense against cyber threats. How can feedback loops make training more effective?”

Bob Collier

“Great question, Liz. Feedback from real-world scenarios is a goldmine for improving training programs. For instance, after running a phishing simulation, an organization might notice that employees consistently fall for certain types of emails—like those impersonating HR or IT support. This feedback can inform the design of future training sessions, focusing on those specific attack vectors.”
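The simulation feedback Bob mentions boils down to aggregating results per lure category, so training can target what employees actually fall for. A minimal sketch, with hypothetical field names:

```python
from collections import Counter

def failure_rates_by_lure(results):
    """Per-category click-through rates from a phishing simulation.

    results: list of dicts like {"lure": "hr", "clicked": 1 or 0}.
    Returns {lure: fraction of recipients who clicked}.
    """
    clicks, totals = Counter(), Counter()
    for r in results:
        totals[r["lure"]] += 1
        clicks[r["lure"]] += r["clicked"]
    return {lure: clicks[lure] / totals[lure] for lure in totals}
```

A category with a high rate (say, HR impersonation) would then get dedicated coverage in the next training cycle, closing the loop Bob describes.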

Liz

“That makes sense. Instead of generic training, it becomes tailored and targeted, addressing the actual weaknesses within the organization. What are the benefits of this approach?”

Bob Collier

“Tailored training increases employee engagement and retention of information. It also fosters a culture of continuous improvement. Employees feel more confident recognizing threats, and the organization benefits from fewer incidents caused by human error.”

Liz

“That’s a win-win situation, Bob. It reminds me of a quote: ‘Feedback is the breakfast of champions.’ In cybersecurity, it’s clear that feedback loops are the key to staying ahead of the curve.”

Bob Collier

“Exactly, Liz. By leveraging threat intelligence, conducting thorough post-incident reviews, and enhancing training programs, organizations can turn feedback into a powerful tool for resilience. And it’s not just about reacting to threats—it’s about proactively preventing them.”

Liz

“Well said, Bob. Next up, we’ll dive into the role of AI in driving continuous improvement through feedback. But before we move on, here’s a question for our listeners: How does your organization use feedback to improve training or incident response? Share your insights in the comments or discussion forum.”

Bob Collier

“Great question, Liz!”

Chapter 4

Segment 3: Role of AI in Continuous Improvement

Liz

“Now that we’ve explored how feedback loops strengthen cybersecurity, let’s zoom in on the role of AI. AI thrives on feedback—it’s what makes these systems smarter, faster, and more effective over time. Bob, how would you summarize AI’s role in this cycle?”

Bob Collier

“Great question, Liz. AI isn’t just a tool; it’s a dynamic partner in cybersecurity. It uses feedback to learn, adapt, and evolve. From reducing false positives in threat detection to refining workflows, AI leverages feedback to drive continuous improvement. Let’s dive deeper into how this works in practice.”

Liz

“Let’s start with one of AI’s superpowers—machine learning. Bob, how do machine learning models adapt through feedback?”

Bob Collier

“Machine learning models are built to learn from data, but they don’t stop learning after deployment. They continuously analyze new data, incorporate feedback from analysts, and refine their algorithms. For example, intrusion detection systems often flag potential threats, but not all flagged events are actual incidents. When a security analyst reviews a false positive and labels it as such, that feedback is used to improve the model. Over time, the system becomes better at distinguishing real threats from benign activity.”
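The analyst-feedback mechanism Bob describes can be illustrated with a deliberately tiny scorer. This is not how a production intrusion detection model works—real systems use trained classifiers—but it shows the same principle: each analyst verdict shifts the model’s future scores. The class name and token-weight scheme are invented for the example.

```python
from collections import Counter

class ThreatScorer:
    """Toy detector that learns from analyst verdicts: tokens seen in
    confirmed threats gain weight, tokens from false positives lose it."""

    def __init__(self):
        self.weights = Counter()

    def score(self, event):
        # Higher score = more threat-like, given past feedback
        return sum(self.weights[tok] for tok in event.split())

    def feedback(self, event, is_real_threat):
        # The analyst's label is fed back into the model
        delta = 1 if is_real_threat else -1
        for tok in event.split():
            self.weights[tok] += delta
```

After one confirmed threat and one labeled false positive, the scorer already ranks a similar threat above similar benign activity—the “better at distinguishing real threats from benign activity” effect, in miniature.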

Liz

“So, the model doesn’t just learn from past incidents—it learns from every interaction. That’s powerful. Do you have an example of this in action?”

Bob Collier

“Sure. A large healthcare provider implemented an AI-driven intrusion detection system. Initially, it flagged a high number of false positives, overwhelming the security team. By incorporating feedback from analysts—what was real, what wasn’t—the system refined its detection capabilities. Within six months, false positives dropped by 40%, allowing the team to focus on real threats.”

Liz

“Another area where AI shines is performance measurement. Metrics like Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) are critical in cybersecurity. How does AI help improve these metrics, Bob?”

Bob Collier

“AI excels at both measurement and optimization. By analyzing feedback loops, AI identifies patterns and bottlenecks in detection and response processes. For example, if a specific type of alert consistently takes longer to resolve, AI can suggest workflow adjustments or automated actions to speed up response times. This leads to faster detection and remediation of threats, directly improving MTTD and MTTR.”
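The two metrics Liz names have straightforward definitions: MTTD averages the gap between when an incident occurred and when it was detected, and MTTR averages the gap between detection and resolution. A small sketch, assuming each incident record carries those three timestamps:

```python
from datetime import timedelta

def _mean(deltas):
    return sum(deltas, timedelta()) / len(deltas)

def compute_mttd_mttr(incidents):
    """incidents: dicts with 'occurred', 'detected', 'resolved' datetimes.
    Returns (mean time to detect, mean time to respond) as timedeltas."""
    mttd = _mean([i["detected"] - i["occurred"] for i in incidents])
    mttr = _mean([i["resolved"] - i["detected"] for i in incidents])
    return mttd, mttr
```

Tracking these values per alert type over time is what lets a feedback loop surface the bottlenecks Bob mentions—an alert category with a stubbornly high MTTR is a candidate for workflow changes or automation.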

Liz

“That’s fascinating. It sounds like AI not only measures performance but also drives improvements based on those insights. What about actionable feedback for teams?”

Bob Collier

“Exactly. AI can generate dashboards or reports highlighting areas for improvement. For instance, it might suggest prioritizing certain vulnerabilities based on their risk level or flagging inefficiencies in the response process. These insights help teams focus their efforts where it matters most.”

Liz

“Now let’s talk about futureproofing—one of the most exciting aspects of AI. Bob, how does feedback from simulations, penetration tests, and red team exercises enhance AI capabilities?”

Bob Collier

“Great question, Liz. AI thrives on diverse and challenging data. Feedback from tests like these exposes the system to potential gaps in defenses. For example, a red team exercise might simulate a DDoS attack that overwhelms a network. By analyzing how the system and team respond, AI can suggest optimizations—whether that’s adjusting traffic filtering rules, enhancing load balancing, or preparing automated responses for similar scenarios.”

Liz

“That’s incredible. It’s like training an athlete—you identify weaknesses, adjust the training plan, and become stronger. Do you have a specific case study to share?”

Bob Collier

“Definitely. A global e-commerce company ran a series of penetration tests to identify vulnerabilities in their payment system. The feedback revealed a weak point in their API security. Using AI, they implemented adaptive security measures that monitored and responded to unusual activity in real-time. When another simulated attack was attempted a month later, the system successfully mitigated it without human intervention.”

Liz

“Bob, what I’m hearing is that AI doesn’t just replace human efforts—it enhances them. This concept of ‘collaborative intelligence’ keeps coming up. Can you expand on that?”

Bob Collier

“Absolutely, Liz. Collaborative intelligence is about leveraging the best of both worlds. AI processes massive amounts of data and identifies patterns faster than humans ever could. But human analysts provide context, creativity, and ethical oversight. Feedback loops ensure that both sides continuously learn from each other. For instance, if AI misses a subtle phishing attempt, the human analyst’s correction feeds back into the system, making it better at catching similar threats in the future.”

Liz

“That’s such an exciting vision of the future, Bob—a world where AI and humans work hand-in-hand to build stronger defenses. Next, we’ll discuss the ethical dimensions of AI implementation in cybersecurity, but before we move on, here’s a thought for our listeners: How does your organization use AI to enhance cybersecurity? Share your thoughts in the discussion forum or comments.”

Bob Collier

“Big vision…”

Chapter 5

Segment 4: Ethical Dimensions of AI Implementation

Liz

“Now that we’ve explored how AI leverages feedback loops to strengthen cybersecurity, it’s time to discuss an equally important aspect: ethics. While AI brings immense potential, its implementation in cybersecurity raises crucial ethical questions. Bob, where should we start?”

Bob Collier

“Ethics is at the heart of responsible AI use, Liz. It’s not just about what AI can do—it’s about what it should do. Organizations must navigate issues like data privacy, bias, transparency, and accountability to ensure their AI systems align with both organizational values and societal norms. Let’s break this down.”

Liz

“Let’s start with the human role. Bob, how do people influence the ethical design and deployment of AI?”

Bob Collier

“Humans play a critical role in shaping AI, from designing algorithms to training models. Ethical frameworks need to be built into AI systems from the ground up. This includes ensuring that data used to train AI is representative and unbiased. For example, if a threat detection system is trained only on data from specific industries, it might fail to recognize threats in others. Humans must oversee these processes to identify and mitigate such gaps.”

Liz

“That’s a great point. What about the challenge of balancing automation with human oversight? Is there a risk of over-reliance on AI?”

Bob Collier

“Absolutely, Liz. While AI can automate many tasks, humans need to remain in the loop for critical decisions, especially when ethical dilemmas arise. For example, if AI recommends blocking a user based on suspicious activity, human analysts should validate whether the action is justified or if it’s a false positive. This ensures fairness and prevents unwarranted disruptions.”

Liz

“Another key area is data—AI thrives on it, but it also raises questions about how data is collected, stored, and used. Bob, what are the ethical considerations here?”

Bob Collier

“Transparency is crucial, Liz. Organizations need to clearly communicate how they collect and use data, especially when it’s personal or sensitive. For example, if an organization uses AI to monitor employee activity for cybersecurity purposes, employees should be informed about what’s being monitored, why, and how the data is being protected. Opt-in policies and clear consent mechanisms are essential to building trust.”

Liz

“That makes sense. Can you give an example of responsible data use in AI-driven cybersecurity?”

Bob Collier

“Sure. A financial institution implemented an AI system to detect insider threats. Instead of monitoring all employee activity indiscriminately, they focused on specific behaviors linked to security risks, such as unauthorized access attempts or large data transfers. By narrowing the scope and anonymizing data whenever possible, they protected employee privacy while maintaining security.”
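The “narrow the scope and anonymize” approach in Bob’s example can be sketched as a simple filter plus pseudonymization. The event fields, action names, and truncated-hash scheme are illustrative assumptions; a real deployment would use keyed hashing (e.g. HMAC) so pseudonyms can’t be brute-forced from a list of usernames.

```python
import hashlib

def scoped_events(events, risky_actions=("unauthorized_access", "bulk_transfer")):
    """Keep only risk-linked behaviors and pseudonymize the user field,
    discarding everything outside the monitoring scope."""
    out = []
    for e in events:
        if e["action"] in risky_actions:
            pseudo = hashlib.sha256(e["user"].encode()).hexdigest()[:12]
            out.append({"user": pseudo, "action": e["action"]})
    return out
```

Ordinary activity never enters the pipeline, and what does enter carries a stable pseudonym rather than an identity—analysts can still correlate a user’s risky actions without knowing who the user is until escalation.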

Liz

“Now, let’s talk about foresight. Ethical AI also involves anticipating risks and preparing for worst-case scenarios. Bob, how do organizations approach this?”

Bob Collier

“Scenario planning is a critical part of ethical AI implementation. Organizations need to identify potential misuse of AI systems and create safeguards. For instance, imagine an AI-powered threat detection system being manipulated by feeding it misleading data. Organizations must simulate such scenarios to understand vulnerabilities and strengthen their defenses. This includes testing for adversarial attacks on AI models.”

Liz

“That’s fascinating. Can you share a real-world example where scenario planning revealed an ethical challenge?”

Bob Collier

“Definitely. A tech company developing an AI system for facial recognition discovered during testing that it struggled with accuracy for certain demographics. By addressing this early—retraining the model with more diverse data—they avoided deploying a biased system. This type of proactive planning ensures AI systems are fair and effective before they’re fully implemented.”

Liz

“It seems like ethical AI is about finding a balance between leveraging technology and safeguarding human values. What are some best practices for organizations to strike this balance?”

Bob Collier

“First, organizations should establish clear ethical guidelines for AI development and use. This includes defining acceptable use cases, accountability structures, and processes for addressing ethical concerns. Second, fostering cross-functional collaboration is key—bringing together technologists, legal experts, and ethicists to evaluate AI systems. Lastly, continuous monitoring and auditing ensure AI systems remain aligned with ethical standards over time.”

Liz

“Before we wrap up this segment, let’s talk about the bigger picture. How can organizations work together to promote ethical AI?”

Bob Collier

“Collaboration is essential, Liz. Industry-wide partnerships can help set shared standards for ethical AI. For instance, initiatives like the Partnership on AI bring together companies, researchers, and nonprofits to address ethical challenges. Governments and regulatory bodies also play a role in establishing clear rules for AI use, ensuring accountability and protecting citizens.”

Liz

“That’s inspiring, Bob. It reminds us that ethical AI isn’t just a technical issue—it’s a collective responsibility. Up next, we’ll discuss the challenges and opportunities for AI-human collaboration in cybersecurity. But before we move on, here’s a thought for our listeners: How does your organization address the ethical implications of AI in cybersecurity? Share your ideas in the discussion forum or comments.”

Bob Collier

“Next, we’ll cover overcoming those barriers.”

Chapter 6

Segment 5: Challenges and the Path Ahead

Liz

“As we near the conclusion of this episode, it’s time to address the challenges organizations face in achieving seamless AI-human collaboration in cybersecurity. While the potential is immense, there are obstacles to overcome and opportunities to explore. Bob, where should we start?”

Bob Collier

“Why don’t you pick, Liz…”

Liz

“Let’s begin with the challenges. Bob, what are the most significant barriers organizations face when implementing AI in cybersecurity?”

Bob Collier

“Great question, Liz. There are several key barriers:

1. Many organizations lack the skilled personnel needed to deploy and manage AI systems effectively. Cybersecurity professionals may not have experience with AI, while data scientists might not understand the intricacies of cybersecurity. Bridging this gap requires cross-disciplinary training and collaboration.

2. Another challenge is trust. AI systems can sometimes be seen as ‘black boxes,’ where decisions or alerts aren’t easily explained. This lack of transparency can lead to skepticism among users, who might hesitate to rely on AI-driven insights.

3. Finally, there’s a risk of over-reliance on AI. While AI is powerful, it’s not infallible. Over-dependence can leave organizations vulnerable if the system encounters unexpected scenarios or adversarial attacks.”

Liz

“Those are significant hurdles. What steps can organizations take to address these barriers?”

Bob Collier

“To tackle these challenges:

- Invest in training programs that teach cybersecurity professionals the basics of AI and help AI experts understand security principles. Collaboration between teams is essential.

- Implement tools that make AI decisions more transparent and easier to understand. For instance, visualizing why an anomaly detection system flagged specific activity builds trust.

- Strike a balance by using AI as an assistant rather than a replacement. Regular audits and human oversight ensure the system remains effective and ethical.”

Liz

“Collaboration is a recurring theme in our discussions. How can organizations foster a culture of collaboration to maximize the benefits of AI-human synergy?”

Bob Collier

“Collaboration starts with breaking down silos between departments. Cybersecurity, IT, legal, and HR teams need to work together to share insights and align on objectives. For example:

- Sharing threat intelligence across departments can provide a more comprehensive view of risks and improve response strategies.

- Bringing different teams together for AI and cybersecurity training fosters a shared understanding and stronger relationships.

- Establishing committees that include representatives from various teams can help oversee AI deployments and ensure they align with organizational values and ethical standards.”

Liz

“Those are fantastic strategies. Collaboration isn’t just about tools or systems—it’s about people coming together to address challenges holistically.”

Bob Collier

“Great call-out Liz…”

Liz

“Let’s shift gears and talk about the future. Bob, where do you see AI-human collaboration heading in the next few years, particularly in cybersecurity?”

Bob Collier

“The future is all about adaptability and integration, Liz. Here’s what I envision:

1. AI systems will become more intuitive, adapting to the needs of analysts and providing real-time, actionable insights without overwhelming them with data.

2. AI will enable organizations to move from reactive to proactive security measures. For example, AI-driven simulations and predictive analytics will help identify potential vulnerabilities before they’re exploited.

3. The next generation of AI tools will focus on usability, making it easier for non-technical users to interact with and understand AI-driven systems.

4. AI will evolve alongside threats, becoming increasingly effective at learning from diverse scenarios, including adversarial attacks. These systems will not only detect threats but also anticipate new attack vectors.”

Liz

“Bob, that’s an exciting vision for the future. But as we move forward, what’s your advice for organizations looking to overcome today’s challenges and prepare for tomorrow?”

Bob Collier

“My advice is simple: Start small but think big. Begin by incorporating AI into specific areas of your cybersecurity strategy, such as threat detection or vulnerability management. Use feedback loops to refine these systems and build confidence in their capabilities. At the same time, invest in the skills, tools, and collaborations needed to scale AI adoption responsibly.”

Liz

“That’s great advice, Bob. Before we wrap up, let’s leave our listeners with some final thoughts…”

Bob Collier

“Absolutely, Liz.”

Chapter 7

Conclusion: The Strength in Synergy

Liz

“What an incredible discussion today, Bob. We’ve covered so much ground, from the mechanics of feedback loops to the role of AI in continuous improvement, ethical considerations, and the challenges that lie ahead. But if there’s one central theme that stands out, it’s this: true strength in cybersecurity comes from the synergy between humans and AI.”

Bob Collier

“Absolutely, Liz. Let’s recap some of the key takeaways for our listeners:

1. Feedback loops are essential for iterative improvements, allowing organizations to learn from incidents, refine their defenses, and adapt to evolving threats.

2. AI excels at processing vast amounts of data and identifying patterns, while humans bring context, creativity, and ethical oversight. Together, they create a powerful, complementary defense system.

3. Building trust and transparency in AI systems is critical. Ethical frameworks, responsible data use, and human oversight ensure AI serves the organization and its stakeholders responsibly.

4. Skill gaps, trust issues, and the risk of over-reliance on automation are challenges that can be overcome through training, collaboration, and thoughtful implementation.”

Liz

“Well said, Bob. These points remind us that cybersecurity isn’t just about technology—it’s about people, processes, and purpose. When we combine the best of human intelligence with the power of AI, the results can be truly transformative.”

Bob Collier

“Thanks, Liz.”

Liz

“As we wrap up, we’d like to encourage all our listeners to reflect on what they’ve learned today and take action:

- Look for opportunities to integrate AI into your cybersecurity processes, even if it’s just for one specific use case.

- Bring your teams together to share insights and explore how AI can enhance your existing workflows.

- Share your thoughts, questions, and ideas about today’s episode in the discussion forum or comments. The more we share, the more we grow as a community.”

Bob Collier

“And remember, cybersecurity is a journey, not a destination. Continuous improvement through feedback, collaboration, and ethical responsibility will keep your organization resilient in the face of ever-changing threats.”

Liz

“Looking ahead, we’re excited to bring you Episode 12. In that episode, we’ll explore how AI is shifting the paradigm from reactive defenses to proactive strategies that stop threats before they happen.”

Bob Collier

“You won’t want to miss it, so be sure to subscribe and join us for the next discussion. And as always, we’d love to hear your feedback on today’s episode. Your insights help us grow and bring you even better content.”

Liz

“Thanks for tuning in to Both Sides of Artificial Intelligence in Cybersecurity. Together, let’s continue building smarter, stronger, and more ethical defenses. Until next time, stay secure and keep learning!”

Bob Collier

“Goodbye, everyone, and remember—the strength in synergy is the key to a safer digital world.”

About the podcast

Artificial Intelligence (AI) is a branch of computer science focused on creating machines or software that can perform tasks traditionally requiring human intelligence. These tasks include understanding language, recognizing patterns, solving problems, making decisions, and even adapting based on experience. AI systems simulate human cognitive processes, using vast amounts of data to learn and make informed predictions or actions.

This podcast is brought to you by Jellypod, Inc.

© 2025 All rights reserved.