Many organizations have fragmented information dispersed across legacy and modern databases. Streamlining such data can be laborious, requiring coordination across multiple system owners while keeping data quality high and error-free. And because corporate infrastructure rarely operates on a flat network, the challenge is compounded further. Balancing these demands can be a delicate process.
In a bid to tackle these issues, Robotic Process Automation (RPA) has become more attractive to organizations thanks to its ability to work with both legacy and current information systems. Labor-intensive tasks can be automated entirely by RPA digital agents with consistent precision, freeing staff to add value in other areas of the business.
Balancing Automation With Human Expertise
Unlike robots, humans are able to provide creative input, increase quality of service, and drive sales. No amount of automation will ever be as effective as the personal touch of a real human being for these types of tasks.
Digital agents, on the other hand, are designed to mimic established processes. They can be coached using machine learning to adapt to minor changes in a process, ensuring defined processes continue relatively uninterrupted, with some flexibility. Humans, by nature, are prone to deviations and affected by physical and emotional factors that don’t faze computers. However, when adding digital agents to enhance your workforce, and as part of business continuity planning, there are some important points to consider.
Accountability for Automated Actions
When tasks go wrong, who should be held responsible for the actions carried out by a digital agent? In line with common project management techniques, the individual who personally completes a task is responsible for its implementation, and the degree of that responsibility is determined by the individual who holds accountability.
The accountable person is the individual who is ultimately answerable for the activity or decision. This includes “yes” or “no” authority and veto power, and only one person can be accountable for a given action. It is often the case that an organizational information security policy (ISP) or acceptable use policy (AUP) will contain statements such as:
- It is the user’s responsibility to ensure their activities are compliant.
- The user is accountable and responsible for their activity while using corporate information systems.
- The user must be uniquely identifiable in the network to meet the accountability requirement.
These policies are intended to underpin data privacy legislation and to support legal, regulatory, or audit frameworks such as ISO 27001. However, an ISP or AUP is rarely written with digital agents in mind.
A digital agent may handle automated office procedures and business-critical systems, such as pensions and payroll. Currently, robot workers cannot be considered liable, culpable, accountable, or responsible when errors occur. That’s why it is so important to clearly define who is accountable and responsible for the digital agents within your team, including any work they carry out on an organization’s behalf.
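One practical way to satisfy both the unique-identifiability requirement from the policy statements above and the rule that only one person is accountable for an action is to keep a register that maps each agent’s service account to exactly one accountable human. The sketch below is a minimal illustration, not a product feature; all names in it (such as `svc-rpa-payroll-01` and the owner email) are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DigitalAgent:
    # Unique service identity, so the agent's activity is attributable
    # on the network (naming convention here is hypothetical).
    service_account: str
    description: str
    accountable_owner: str  # exactly one accountable human per agent


class AgentRegistry:
    """Tracks digital agents and enforces a single, unique identity each."""

    def __init__(self) -> None:
        self._agents: dict[str, DigitalAgent] = {}

    def register(self, agent: DigitalAgent) -> None:
        # Reject duplicates so every service account stays uniquely identifiable.
        if agent.service_account in self._agents:
            raise ValueError(f"{agent.service_account} is already registered")
        self._agents[agent.service_account] = agent

    def owner_of(self, service_account: str) -> str:
        # Answers "who is accountable?" when an automated action goes wrong.
        return self._agents[service_account].accountable_owner


registry = AgentRegistry()
registry.register(DigitalAgent(
    service_account="svc-rpa-payroll-01",
    description="Monthly payroll reconciliation bot",
    accountable_owner="jane.doe@example.com",
))
print(registry.owner_of("svc-rpa-payroll-01"))  # prints "jane.doe@example.com"
```

A register like this also gives auditors a single place to answer the accountability question, mirroring how an ISP or AUP treats human users.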
The Ethics of Automation
There is a similar dilemma regarding the ethical arguments around the use of artificial intelligence (AI). Before deciding to add digital agents as part of the modern cognitive workplace, there are several items to consider — such as organization structure, accountability, user access, and monitoring.
Successful automation deployment often depends on understanding these issues and devising solutions as early as possible. Unfortunately, these are new and complex topics, and the solution almost always depends on circumstances, because every organization’s culture and resources are different. As automation continues to develop, frameworks will expand and relevant guidelines will improve. The important point is that a structure must be created, followed, and enhanced over time. Smarter organizations will use existing methodologies in the absence of ones aligned to digital agents, while remaining mindful of any gaps caused by a new technology or process.
Implementing Automation With Intelligence
Risk appetite varies from one organization to the next, so there is no single “right” way for all companies to automate. While automation programs can improve the quality of data spread across several systems, they are no substitute for robust human management.
The first step before integrating automation into an existing process should be involving information security risk teams. Subsequent steps should include training, so the new way of working can be established and implemented effectively. Finally, teams must be assured that introducing automation is not about replacing employees; it’s about supporting them and empowering them to take on more of the things that humans do better than machines.
Recorded Future’s combination of automated data collection and human analysis empowers our clients to supercharge security processes through automation. Check out our free e-book, “Beyond SOAR: 5 Ways to Automate Security With Intelligence” to learn more.