Application and Other Explanatory Material
This ASAE does not cover engagements on financial reporting controls at a service organisation that are reported under ASAE 3402, including reasonable assurance reports on internal controls of investor-directed portfolio services (IDPS) and IDPS-like services relating to specific annual investor statements as required by ASIC Class Orders. These service auditor’s reports may be used as evidence for the financial audit of a user entity under ASA 402.
See CO 13/763 Investor directed portfolio services and CO 13/762 Investor directed portfolio services provided through a registered managed investment scheme.
The primary purpose of an assurance engagement is the conduct of assurance procedures to provide an assurance conclusion. However, the assurance practitioner is not precluded from providing recommendations for improvements to controls in conjunction with or as a result of conducting an assurance engagement to report on controls.
The risks, control objectives and related controls addressed in an engagement under this ASAE may relate to any subject matter relevant to the entity. The subject matter can be any activity of the entity, whether a function or service, such as: compliance with legislation or regulation; financial reporting; management reporting; emissions and energy reporting; economy, efficiency and effectiveness or ethical conduct.
Controls are put in place by an entity to reduce to an acceptably low level the risks that threaten achievement of the entity’s control objectives. To implement effective controls, the entity needs to:
- identify or develop control objectives;
- identify the risks that threaten achievement of those control objectives;
- design and implement controls that would mitigate those risks, in all material respects, when operating effectively; and
- monitor the operation of those controls to determine whether they are operating effectively throughout the period.
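The four steps above can be sketched as a simple "control register" linking each control objective to the risks that threaten it and to the controls designed to mitigate those risks. This is an illustrative sketch only, not part of the ASAE; all control, risk and objective names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    mitigates: list                      # risks this control is designed to mitigate
    operating_effectively: bool = True   # outcome of monitoring the control

@dataclass
class ControlObjective:
    description: str
    risks: list = field(default_factory=list)
    controls: list = field(default_factory=list)

    def unmitigated_risks(self):
        """Risks with no effectively operating control addressing them."""
        covered = {r for c in self.controls if c.operating_effectively
                   for r in c.mitigates}
        return [r for r in self.risks if r not in covered]

# Hypothetical example: one objective, two identified risks, two controls,
# one of which monitoring has found not to be operating effectively.
objective = ControlObjective(
    description="Only authorised users can access the system",
    risks=["shared passwords", "stale accounts"],
    controls=[
        Control("password policy enforced", mitigates=["shared passwords"]),
        Control("quarterly access review", mitigates=["stale accounts"],
                operating_effectively=False),
    ],
)

print(objective.unmitigated_risks())  # the risk left uncovered by effective controls
```

The point of the sketch is the traceability it enforces: every risk is tied to an objective, every control to the risks it mitigates, so a gap in the chain surfaces directly.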
Assurance engagements on controls are structured to suit the particular circumstances of the engagement and the needs of users, for example:
- Reports for internal use, to assess prior to implementation whether the controls as designed will achieve identified control objectives, may be restricted-use reports on the design and description of controls over a specific system, or reports on design only, in long-form, so that the controls as designed can be clearly identified.
- Reports for internal use, to determine whether the implementation of new controls, or of controls within a new system, was carried out satisfactorily as designed so that the controls are able to operate effectively, may be restricted-use reports on the design, description and implementation of controls or, if no description is available, on design and implementation, in long-form.
- Publicly available reports, such as a report for customers of cloud services to provide assurance with respect to IT security, including confidentiality, integrity and availability of IT resources relating to the services provided, presented in the short-form only, on design and operating effectiveness. Long-form reports may contain competitively sensitive information or information which undermines security as a result of the detailed description of tests of controls and deficiencies detected and may not be suitable for wide distribution.
- Reports on service organisation’s controls relevant to the security, availability, processing integrity, confidentiality or privacy of the information processed or stored for user entities in order for the user entity to be able to assess and manage the risks associated with outsourcing services provided to customers, will usually require a long-form restricted use report on design, description and operating effectiveness of controls, detailing the tests conducted and the results of those tests. The services provided by service organisations in these circumstances may include: cloud computing, managed IT security, customer on-line or telephonic support, sales force automation (order processing, information sharing, order tracking, contact management, customer management, sales forecast analysis or employee performance evaluation), health care or insurance claim management and processing or IT outsourcing services.
The primary practical difference for the assurance practitioner between an attestation and a direct engagement is the additional work effort for a direct engagement when planning the engagement and understanding the system and other engagement circumstances. In a direct engagement the assurance practitioner identifies, selects or develops the control objectives which address the purpose or overall objectives of the engagement and identifies the controls which are designed to achieve those objectives. This difference affects the assurance practitioner’s work effort in planning a direct engagement if the controls relevant to the control objectives have not been identified or documented and in understanding the entity’s system where a description of the system is not available.
In a three party relationship, which is an element of an assurance engagement, the responsible party may or may not be the engaging party, but is responsible for the controls which are the subject matter of the engagement and is a separate party from the intended users. The responsible party and the intended users may both be internal to the entity, for example if the responsible party is at an operational level of management and the intended users are at the level of those charged with governance, such as the Board or Audit Committee. See Appendix 2 for a discussion of how each of these roles relates to an assurance engagement on controls.
See Framework for Assurance Engagements.
Although this ASAE does not apply to engagements on controls required to be conducted under ASAE 3402, an engagement may include combined reporting under this ASAE and ASAE 3402. A service organisation may agree by contractual arrangements with user entities to provide an assurance report on controls for the purposes of both providing evidence for user entities’ financial report audit and satisfying user entities’ obligations to customers or employees. Consequently, the assurance report may contain a section prepared under ASAE 3402 which concludes on the operating effectiveness of controls at the service organisation that are likely to be relevant to user entities’ internal control as it relates to financial reporting and a section prepared under this ASAE which concludes on controls relevant to user entities’ operational needs, such as accessibility and availability of IT resources, or contractual commitments to customers or employees, such as security, confidentiality and privacy of personal information or health and safety of workers engaged to produce products supplied.
Components of control are defined by the control framework applied. For example the components of control may comprise:
- the COSO Framework components: the control environment, risk assessment, control activities, information and communication and monitoring activities;
- COBIT 5, framework for the governance and management of enterprise IT, enablers: principles, policies and frameworks; processes; organisational structures; culture, ethics and behaviour; information; services, infrastructure and applications; and people, skills and competencies;
- IT-enabled systems components:
- infrastructure – physical facilities, equipment, IT hardware and IT networks;
- software – IT operating system, software applications and utilities;
- people – IT developers, testing and implementation personnel, system and database administrators, operators, users and managers;
- procedures – automated and manual procedures involved in the system’s operation; and
- data – information processed, generated, stored, transmitted and managed, including transactions, files, messages, images, records, databases and tables.
In accepting an assurance engagement on controls, the assurance practitioner, in order to comply with relevant ethical requirements, considers whether the assurance practitioner has provided internal audit or consulting services with respect to the design or implementation of controls at the entity, as any such past or current engagements are likely to impair the assurance practitioner’s independence and to preclude acceptance of the engagement.
In a direct engagement, in order to establish whether the preconditions for an assurance engagement are present as required by ASAE 3000, circumstances may require the assurance practitioner to commence the assurance engagement in order to obtain the information needed to determine whether the preconditions can be satisfied. If the assurance practitioner develops the control objectives for evaluating the design of controls, the assurance practitioner may not be able to determine if suitable criteria will be available until after the assurance engagement has commenced.
Competence and Capabilities to Perform the Engagement
Relevant competence and capabilities of the persons who are to perform the engagement, including having sufficient time to perform the controls engagement, as required by ASAE 3000, include matters such as the following:
- Knowledge of the relevant industry, controls framework, type of system and of the nature of the overall objective of the relevant controls (for example: financial reporting, emissions quantification or regulatory compliance).
- An understanding of IT and systems.
- Experience in evaluating risks as they relate to the suitable design of controls.
- Experience in the design and execution of tests of controls and the evaluation of the results.
When deciding whether to accept an engagement to report on the design, but not the implementation, of controls, or on the design and implementation of controls at a point in time, but not the operating effectiveness of controls over the period, the assurance practitioner considers whether the engagement has a rational purpose, as required when meeting the preconditions of an assurance engagement in accordance with ASAE 3000. An engagement on design only may have a rational purpose if the controls as designed have neither been implemented nor are in operation. However, if the design has already been implemented or is in operation, then the assurance practitioner considers whether the purpose of the engagement is logical or whether the assurance report may be misleading to users. For an engagement on design and implementation, if the controls are in operation, the assurance practitioner considers whether the assurance report is likely to meet the needs of users or may be misunderstood as providing assurance on the operating effectiveness of controls. Nevertheless, it may be justifiable for the entity to seek assurance on the design of new controls prior to implementation, or on the design and implementation of a change in controls, even if there are existing controls in operation.
When considering the acceptance of a limited assurance engagement on controls, ASAE 3000 requires the assurance practitioner to determine whether a meaningful level of assurance is expected to be able to be obtained, which may include whether a limited assurance engagement is likely to be meaningful to users. In making this assessment, the assurance practitioner considers the intended users of the assurance report and whether they are likely to understand the limitations of a limited assurance engagement, including the need to read the assurance report in detail to understand the assurance procedures performed and the assurance obtained.
The controls which are the subject matter of the engagement may be defined by:
- the component/s of control which they address, which are determined by the control framework applied, but may include:
- the control environment;
- risk assessment;
- control activities;
- information and communication; or
- monitoring activities;
- the system, being the function or service provided by that system; and
- the entity or facility boundaries.
Control objectives ordinarily comprise the main criteria for evaluation of the design of controls. In assessing the suitability of the criteria for evaluating the design of controls, the assurance practitioner considers whether the control objectives:
- Are specified by outside parties (such as a regulatory authority, a user group, or a professional body that follows a transparent due process), identified by the entity, or identified by the assurance practitioner.
- Address compliance requirements, specified by legislation, regulation or by contractual agreement.
- If identified by the entity, are complete and address each of the overall objectives relevant to the system, whether a function or service.
Additional criteria for assessing the suitability of the design may be derived from the risks that threaten achievement of the control objectives identified.
In a direct engagement, the assurance practitioner may not be provided with control objectives and so will need to identify, select or develop the control objectives to apply as the criteria for evaluating the design of controls. The assurance practitioner may either identify or select control objectives which have already been developed or develop the control objectives themselves. The work effort required by the assurance practitioner, when planning a direct engagement, in identifying suitable control objectives as well as the related controls, is ordinarily substantially greater than for the equivalent attestation engagement.
The responsible party implicitly or explicitly makes assertions regarding the recognition, measurement, presentation, disclosure or compliance of the subject matter, which reflect the overall objectives of the system. These overall objectives can be applied in assessing the suitability of the specific control objectives to meet the needs of users. Overall objectives may be expressed in different terms under different frameworks, such as “key system attributes”, “goals” or “business requirements”, and may include:
- for transactions, activities and events over a period:
- occurrence;
- completeness;
- accuracy;
- cut-off; and
- classification;
- for volumes, amounts or balances as at a date:
- existence;
- rights and obligations;
- completeness; and
- valuation and allocation;
- for presentation and disclosure in a report:
- occurrence and rights and obligations;
- completeness;
- classification and understandability; and
- accuracy and valuation;
- for performance of the system:
- economy;
- efficiency; and
- effectiveness;
- for contractual obligations of a service organisation, providing IT, on-line or cloud services for virtual processing of information, communications or data and storage of data or information, over a period:
- accessibility and availability; and
- data integrity, including:
- completeness;
- accuracy; and
- timeliness.
The materiality matrix in Appendix 4 plots these overall objectives to provide a frame of reference for assessing materiality.
The way in which the overall objectives, described above, are expressed will vary widely depending on the control framework applied or developed. For example, COBIT 5 categorises “goals” for Enterprise IT as: intrinsic quality, contextual quality and access and security. APRA Prudential Practice Guide PPG 234 Management of security risk in information and information technology (1 February 2010) defines “security risk” as the potential compromise to: confidentiality (authorised access), integrity (completeness, accuracy and freedom from unauthorised change) and availability (accessibility and usability). The responsible party may apply whichever control framework is required by regulation or legislation or, for a voluntary engagement, whichever framework represents suitable criteria for the evaluation of controls in the particular circumstances of the engagement.
If the scope of the engagement specifies overall control objectives, then suitable criteria for evaluating the design of controls are specific control objectives which address each of those overall objectives.
Suitable criteria need to be identified by the parties to the engagement and agreed by the engaging party and the assurance practitioner. The assurance practitioner may need to discuss the criteria to be used with those charged with governance, management and the intended users of the report. Criteria can be either established or specifically developed. The assurance practitioner normally concludes that established criteria embodied in laws or regulations or issued by professional bodies, associations or other recognised authorities that follow due process are suitable when the criteria are consistent with the objective. Other criteria may be agreed to by the intended users of the assurance practitioner’s report, or a party entitled to act on their behalf, and may also be specifically developed for the engagement.
In situations where the criteria have been specifically developed for the engagement, including where the assurance practitioner develops or assists in developing suitable criteria, the assurance practitioner obtains from the intended users or a party entitled to act on their behalf, acknowledgment that the specifically developed criteria are sufficient for the user’s purposes.
Additional criteria that the assurance practitioner may consider when evaluating a description include whether the description presents:
- the types of services provided, including, as appropriate, the nature of the data stored and/or information processed;
- the procedures by which data was recorded and stored and information was processed;
- how the system dealt with significant events and conditions;
- the process used to prepare reports for clients;
- relevant control objectives and controls designed to achieve those objectives;
- controls that the entity assumed, in the design of the system, would be implemented by clients, and which, if necessary to achieve control objectives, are identified in the description along with the specific control objectives that cannot be achieved by the entity alone;
- aspects of other components of control that are relevant to the system described;
- if the scope of the engagement is over a period, relevant details of changes to the system during the period; and
- information relevant to the scope of the system being described without distortion or omission, while acknowledging that the description is prepared to meet the needs of the identified users and may not, therefore, include every aspect of the system that each user may consider important in its own particular environment.
In assessing the suitability of the design of the controls as criteria for evaluating implementation of controls, the assurance practitioner may consider if the design encompasses:
- the extent of documentation, including manuals, instructions and policies, needed by those applying the controls to operate or monitor the controls as designed;
- the allocation of responsibilities for controls to enable the controls to be carried out;
- the method of communication with and training of those applying the controls sufficient for them to carry out manual controls so they operate as designed; and
- for IT enabled systems, an implementation plan for:
- the development, acquisition or outsourcing of IT systems, data storage, hardware and other infrastructure needed to meet the specifications required by the design of the controls; and
- the testing and delivery of IT systems sufficient to enable the IT controls to operate as designed.
The criteria may need to be amended during the engagement if, for example, more information becomes available or the circumstances of the entity change. Any changes in the criteria are discussed with the engaging party and, if appropriate, the intended users.
Even if the responsible party is not a party to the terms of the engagement, the assurance practitioner may seek to obtain the responsible party’s written agreement regarding their responsibilities as set out in paragraph 24, if practicable.
When agreeing whether the engagement is to be conducted as an attestation or direct engagement, the assurance practitioner considers factors such as whether:
- there is a regulatory requirement or users need an evaluation of the subject matter by the responsible party or evaluator;
- the entity has the resources and expertise to prepare a suitable description or documentation of the control objectives and related controls and conduct a meaningful evaluation of those controls; or
- it is more cost effective for the entity to identify the specific control objectives and related controls and to evaluate those controls as the basis for an attestation engagement than for the assurance practitioner to do so in a direct engagement.
When identifying the subject matter in the terms of engagement, the system is clearly defined. If the scope of the engagement is imposed by legislation, regulation or other requirement and does not explicitly include design, design is still implicit in the assurance practitioner’s conclusion on description, implementation or operating effectiveness of controls and the work undertaken will include evaluation of the suitability of the design to achieve the control objectives. If the controls are specified by regulation and there is no scope for the assurance practitioner to evaluate the design of those controls, then the assurance practitioner conducts the engagement as a compliance engagement under ASAE 3100.
The subject matter of an engagement conducted under this ASAE is controls which may be directed at a broad range of objectives of the entity. Categories of objectives may be defined by the control framework applied and may include: operations, reporting or compliance objectives. Operations may include performance objectives aimed at economy, efficiency and effectiveness. Reporting objectives may address financial reporting, management reporting or emissions and energy reporting. Compliance objectives may address regulatory, legislative, industry or contractual requirements.
The subject matter may be restricted to a system within the boundaries of the entity, location or operational facility.
The assurance practitioner considers the needs of users in agreeing the point in time or period to be covered by the assurance engagement, so that the report is not likely to be misleading.
If the criteria are control objectives which are available when agreeing the terms of engagement, they may be listed or attached to the engagement letter or other written terms. Otherwise the criteria may be expressed as overall objectives which may be broken down into detailed objectives as part of the engagement.
Whether the assurance practitioner is required to conclude on the design, description, implementation and/or operating effectiveness of controls in achieving overall objectives or specific control objectives will have a significant impact on the work effort required to reach a conclusion. Whether the criteria against which the assurance practitioner assesses controls are the overall control objectives or specific control objectives is determined when accepting the engagement and will depend on the information needs of users. If the conclusion is centred on achievement of overall objectives, then the assurance practitioner can focus the work effort on controls which are material to achieving those overall objectives. In contrast if the assurance report is required to conclude on each specific control objective and/or identified controls to achieve those objectives, then it will be necessary for the assurance practitioner to gather evidence in relation to each individual control objective and/or control identified so that the assurance practitioner can conclude at that level of detail. This is depicted in the table below.
| Conclusion expressed on: | Overall Control Objectives | Specific Control Objectives | Control Procedures |
| --- | --- | --- | --- |
| Materiality based on: | Impact on Overall Control Objectives | Impact on Specific Control Objectives | Impact on each Control Procedure |
| Controls tested: | Controls necessary to mitigate the risks threatening overall objectives | Controls necessary to mitigate the risks threatening specific objectives | Control procedures |
When agreeing whether the report will be in long-form, including matters such as tests of controls and detailed findings, the assurance practitioner considers both the needs of users and the risks of users misunderstanding the context of the procedures conducted or the findings reported. A long-form report may be necessary for users whose assurance providers intend to use specific findings as evidence for an assurance engagement with respect to the user entity or to meet the information needs of a regulator. Reporting tests of controls and findings may be appropriate where the users are knowledgeable with respect to assurance and controls and, therefore, not likely to misinterpret those findings.
Example engagement letters are contained in Appendix 5.
If the assurance practitioner is engaged to report on the operating effectiveness of controls this fact is not a substitute for the responsible party’s own processes to provide a reasonable basis for its Statement on the outcome of the evaluation of controls. If the responsible party’s Statement claims that the controls related to the control objectives operated effectively throughout the period, this Statement may be based on the entity’s monitoring activities. Monitoring of controls is itself a component of control and is a process to assess the effectiveness of controls over time. It involves assessing the effectiveness of controls on a timely basis, identifying and reporting deficiencies to appropriate individuals within the entity, and taking necessary corrective actions. The entity accomplishes monitoring of controls through ongoing activities, separate evaluations, or a combination of both. The greater the degree and effectiveness of ongoing monitoring activities, the less need for separate evaluations. Ongoing monitoring activities are often built into the normal recurring activities of an entity and include regular management and supervisory activities. Internal auditors or personnel performing similar functions may contribute to the monitoring of an entity’s activities. Monitoring activities may also include using information communicated by external parties, such as customer complaints and regulator comments, which may indicate problems or highlight areas in need of improvement.
In order for an engagement to be conducted as an attestation engagement, the responsible party needs to be able to identify:
- the specific control objectives which address each overall control objective;
- the controls designed to achieve each of the specific control objectives; and
- the basis for the responsible party’s evaluation of the design, and/or implementation or operating effectiveness of controls, including documentation supporting the outcome of that evaluation.
If the responsible party cannot demonstrate that they have an adequate basis for their evaluation of controls, as reflected in the responsible party’s Statement, then the assurance practitioner may decide not to accept the engagement as an attestation engagement, but may accept the engagement as a direct engagement, if appropriate.
The adequacy of the basis for the responsible party’s evaluation of the controls reflected in the responsible party’s Statement, including appropriate documentation, will impact the assurance practitioner’s risk assessment. An example of a Statement is contained in Appendix 7, example 1.
A request to change the scope of the engagement may not have a reasonable justification when, for example, the request is made to exclude certain control objectives from the scope of the engagement because of the likelihood that the assurance practitioner’s conclusion would be modified or to reduce the level of assurance to be obtained from reasonable to limited due to a limitation in the available evidence.
A request to change the scope of the engagement may have a reasonable justification when, for example, the request is made to exclude from the engagement an outsourced activity when the entity cannot arrange for access by the assurance practitioner, and the method used for dealing with the services provided by that outsourced activity is changed from the inclusive method to the carve‑out method.
When developing the engagement plan, the assurance practitioner considers factors such as:
- matters affecting the industry in which the entity operates, for example economic conditions, laws and regulations, and technology;
- risks to which the entity is exposed that are relevant to the system being examined;
- the quality of the control environment within the entity and the role of the governing body, audit committee and internal audit function;
- knowledge of the entity’s internal control structure obtained during other engagements;
- the extent of recent changes if any, in the entity, its operations or its internal control structure;
- methods adopted by management to evaluate the effectiveness of the internal control structure;
- preliminary judgements about significant risks;
- the nature and extent of evidence likely to be available;
- the nature of control procedures relevant to the subject matter and their relationship to the internal control structure taken as a whole; and
- the assurance practitioner’s preliminary judgement about the effectiveness of the internal control structure taken as a whole and of the control procedures within the system.
In engagements for which a description of the system is not provided to the assurance practitioner, the assurance practitioner, in planning the engagement, identifies the controls in place through procedures such as enquiry, observation or examination of records or documentation. The assurance practitioner may do this in conjunction with evaluating the suitability of the design of controls to achieve the control objectives and these procedures may also provide evidence of the implementation or operating effectiveness of controls.
In a direct engagement, the responsible party is not required to identify specific control objectives, to evaluate whether controls are suitably designed to achieve those objectives or, if applicable, to evaluate whether the description is fairly presented, the controls were implemented as designed or the controls operated effectively. Consequently, in planning a direct engagement the assurance practitioner considers the additional work required to identify specific control objectives and related controls and any increased risk of deficiencies in the design of controls compared to an equivalent attestation engagement.
The process necessary to identify the overall control objectives, specific control objectives and controls relevant to the achievement of those objectives will vary depending on the size and complexity of the entity or component which is being assured. In identifying, selecting or developing suitable control objectives, the assurance practitioner considers relevant regulation, industry or other requirements and which control objectives are likely to address users’ needs. Whilst the assurance practitioner needs to assess which controls are necessary to achieve the control objectives to be concluded upon, as a basis for determining which controls to test, this assessment need not be a complex process for a small entity or component. The manner in which the identification of control objectives and related controls is documented may range from a simple reference to a more complex matrix. An understanding of whether a control is relevant to the achievement of multiple control objectives or operates in combination with other controls to achieve a single control objective is necessary for the assurance practitioner in planning the controls testing and in evaluating the findings.
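A documentation matrix of the kind described above is, in effect, a many-to-many mapping between controls and control objectives. The sketch below is purely illustrative (the control and objective names are invented, not from the ASAE); it shows how one control can be relevant to several objectives and how an objective can depend on a combination of controls.

```python
# Hypothetical control/objective matrix: control name -> the control
# objectives that control helps achieve.
matrix = {
    "segregation of duties":      ["accuracy of processing", "fraud prevention"],
    "daily reconciliation":       ["accuracy of processing", "completeness"],
    "system access restrictions": ["fraud prevention"],
}

def controls_for(objective):
    """Controls operating (alone or in combination) toward an objective."""
    return sorted(c for c, objs in matrix.items() if objective in objs)

def objectives_for(control):
    """Objectives to which a single control is relevant."""
    return matrix.get(control, [])

print(controls_for("accuracy of processing"))
# -> ['daily reconciliation', 'segregation of duties']
print(objectives_for("daily reconciliation"))
# -> ['accuracy of processing', 'completeness']
```

Inverting the mapping in both directions is what lets the practitioner plan testing (which controls must be tested for a given objective) and evaluate findings (which objectives a deficient control puts at risk).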
The assurance practitioner may decide to discuss elements of planning with management or other appropriate party when determining the scope of the engagement or to facilitate the conduct and management of the engagement (for example, to co-ordinate some of the planned procedures with the work of the entity’s personnel). Although these discussions often occur, the overall engagement strategy and the engagement plan remain the assurance practitioner’s responsibility. When discussing matters included in the overall engagement strategy or engagement plan, care is required in order not to compromise the effectiveness of the engagement. For example, discussing the nature and timing of detailed procedures with the entity may compromise the effectiveness of the engagement by making the procedures too predictable.
The assurance practitioner applies the same considerations in both limited assurance and reasonable assurance engagements regarding what represents a material control, since such judgements are not affected by the level of assurance being obtained.
The significance to users and the impact on the entity of the achievement of the control objectives provide a frame of reference for the assurance practitioner in considering materiality for the engagement. A materiality matrix may be used to plot the significance to users against the impact on the entity of the control objectives to be concluded upon, as an aid to identifying the material controls. An illustrative example of a materiality matrix is contained in Appendix 4.
In a controls engagement, the decisions of users are influenced by whether, and the extent to which, the control objectives are achieved; therefore, the materiality of a control depends on the significance of that control in mitigating the risks which threaten achievement of the control objectives. The assurance practitioner obtains an understanding of those risks with respect to the entity as a whole and the activity, function or location relevant to each control objective. Materiality of controls can be assessed in relation to the achievement of overall control objectives or specific control objectives, depending upon which matter the assurance practitioner will conclude on in the assurance practitioner’s report.
The assurance practitioner considers the materiality of the controls at the planning stage, reassesses materiality during the engagement based on the findings, and considers the materiality of any identified deficiencies in the design, misstatements in the description, deficiencies in implementation or deviations in the operating effectiveness of those controls.
Materiality of controls is primarily based on qualitative factors, such as:
- the significance of the control to achieving a control objective which is to be concluded upon;
- whether the control is pervasive in that it impacts on the achievement of multiple control objectives;
- the significance to users and impact on the entity or activity of the control objectives which the control seeks to achieve, such as the potential impact on reputation or market confidence which may result from a failure in the operation of the control;
- the existence of additional controls which address the same objective, that is the existence of mitigating or compensating controls;
- the extent to which the control permeates the business or activities of the entity, such as the impact of a control over a centralised function (for example computer security, central budgeting or human resource management) on other parts of the entity;
- users’ perceptions and/or interest in the system;
- the cost of alternative controls relative to their likely benefit; and
- the length of time an identified control was in existence.
Materiality with respect to the operating effectiveness of controls may also be based on quantitative factors, in particular where the controls relate to activities expressed in volumes or values, such as:
- the total value of transactions, volume of relevant activity or quantity of the item or resource to which the control relates;
- the number of times the control is applied; or
- the economic impact of a control deficiency or deviation, including potential loss of income, increase in expenditure, foregone cost savings or efficiencies, fines or claims against the entity.
Obtaining an Understanding of the Entity’s System and Other Engagement Circumstances and Identifying and Assessing Risks of Material Misstatement (Ref: Para. 37-38)
The assurance practitioner’s understanding of the system ordinarily has less depth for a limited assurance engagement than for a reasonable assurance engagement. The assurance practitioner’s procedures to obtain this understanding may include:
- Enquiring of those within the entity who, in the assurance practitioner’s judgement, may have relevant information.
- Observing operations.
- Inspecting documents, reports, printed and electronic records.
- Re-performing control procedures.
The nature and extent of procedures to gain this understanding are a matter for the assurance practitioner’s professional judgement and will depend on factors such as:
- the entity’s size and complexity;
- the nature of the system to be examined, including the objective(s) to which the control procedures are directed and the risk that those objectives will not be achieved;
- the extent to which IT is used; and
- the documentation available.
The extent to which an understanding of IT controls is required, and the level of specialist skills necessary, will be affected by the complexity of the IT system, the extent of its use and its importance to the entity, and the extent to which significant control procedures are incorporated into IT systems. The extent of specialist IT skills needed on the assurance team, or the need to engage IT experts, is identified or clarified during this planning stage.
As noted in paragraph 17(g), control objectives relate to risks that controls seek to mitigate. The entity is responsible for identifying the risks that threaten achievement of the control objectives, which are either stated in the entity’s description of its system or Statement of the outcome of the evaluation of controls, or agreed with the assurance practitioner in the terms of engagement and identified in the assurance report. The entity may have a formal or informal process for identifying relevant risks. A formal process may include estimating the significance of identified risks, assessing the likelihood of their occurrence, and designing controls to address them. However, since control objectives relate to risks that controls seek to mitigate, thoughtful identification of control objectives when designing and implementing the entity’s system may itself comprise an informal process for identifying relevant risks.
In practice, in an engagement where there is no description prepared by the responsible party, the assurance practitioner’s work in identifying the relevant control objectives to be addressed may help to formalise the risk assessment process.
Consideration of risks may need to go beyond the immediate system. For example, risks may arise as a result of matters which may influence behaviour, such as basis of remuneration, bonuses or the performance measures applied to employees. Factors such as time pressures for completion of processes or activities may result in circumvention of controls.
When identifying and assessing the risk of material control deficiencies or deviations, the assurance practitioner may consider the following factors:
- that it is unreasonable for the cost of a control to exceed the expected benefits to be derived;
- controls may be directed at routine rather than non‑routine transactions or events;
- the potential for human error due to carelessness, distraction or fatigue, misunderstanding of instructions and mistakes in judgement;
- inconsistency in operation of controls due to automated system interruptions or temporary change in staff due to absences or rotation of roles;
- the possibility of circumvention of controls through fraud, which may include the collusion of employees with one another or with parties outside the entity;
- the possibility that a person responsible for exercising a control could abuse that responsibility, for example, a member of management overriding a control procedure;
- the possibility that management may not be subject to the same controls applicable to other personnel; and
- the possibility that controls may become inadequate due to changes in conditions, such as computer systems or operational changes, and compliance with procedures may deteriorate.
Risks Arising from IT
The use of IT affects the way in which control activities are implemented. From the assurance practitioner’s perspective, controls over IT systems are effective when they maintain the security, confidentiality, privacy and integrity of the data which such systems process, generate and/or store, through both effective general IT controls and process controls, whilst still providing accessibility and availability of that data so that the operations of the entity are not impeded.
General IT controls are policies and procedures that relate to many software applications and support the effective functioning of process controls. Deficiencies in general IT controls can undermine the effectiveness of process controls and may render those process controls ineffective. General IT controls that maintain the security, confidentiality, privacy, integrity, accessibility and availability of data commonly include controls over the following:
- Data centres, network operations and cloud services.
- Acquisition, development, change management, testing, deployment and maintenance of:
- Technology infrastructure.
- Data management systems.
- System access and data transfer security and confidentiality.
- Business continuity, disaster recovery, backup and restoration.
They are generally implemented to deal with the risks referred to in paragraph A64 below.
Process controls are manual or automated procedures that typically operate at a business process level and apply to the processing of data by individual software applications. Process controls can be preventive or detective in nature and are designed to ensure the integrity, including completeness, accuracy, timeliness and authorisation, of the data. Accordingly, process controls relate to procedures used to initiate, record, process and report data or transactions. These controls help ensure that data or transactions occurred, are authorised, are completely and accurately recorded, processed in the correct period, within an appropriate timeframe and within required service levels. Examples include edit checks of input data, and numerical sequence checks with manual follow-up of exception reports or correction at the point of data entry.
Generally, IT benefits an entity’s internal control by enabling an entity to:
- consistently apply predefined criteria and perform complex calculations in processing large volumes of transactions or data;
- enhance the timeliness, accessibility, availability, and accuracy of information;
- facilitate the additional analysis of information;
- enhance the ability to monitor the performance of the entity’s activities and its policies and procedures;
- reduce the opportunity for controls to be circumvented; and
- enhance the ability to achieve effective segregation of duties by implementing security controls in software applications, databases, and operating systems.
IT also poses specific risks to an entity’s internal control, including, for example:
- reliance on systems or programs that are inaccurately processing data, processing inaccurate data, or both;
- unauthorised access to data that may result in breaches of confidentiality or privacy, deletion or manipulation of data, including the recording of unauthorised or non-existent data, or inaccurate recording of data. Particular risks may arise where multiple users access a common database;
- the possibility of personnel gaining access privileges beyond those necessary to perform their assigned duties thereby breaking down segregation of duties;
- unauthorised changes to data in master files;
- unauthorised changes to systems or programs;
- failure to make necessary changes or patches to systems or programs;
- inappropriate manual intervention; and
- potential loss of data or inability to access data as required.
Risks arising from Manual Controls
Manual elements in internal control may be more suitable where judgement and discretion are required such as for the following circumstances:
- Large, unusual or non-recurring transactions.
- Circumstances where errors are difficult to define, anticipate or predict.
- In changing circumstances that require a control response outside the scope of an existing automated control.
- In monitoring the effectiveness of automated controls.
Manual elements in internal control may be less reliable than automated elements because they can be more easily bypassed, ignored, or overridden and they are also more prone to simple errors and mistakes. Consistency of application of a manual control element cannot therefore be assumed. Manual control elements may be less suitable for the following circumstances:
- High volume or recurring transactions, or in situations where errors that can be anticipated or predicted can be prevented, or detected and corrected, by control parameters that are automated.
- Control activities where the specific ways to perform the control can be adequately designed and automated.
The scope of the engagement may require the assurance practitioner to conclude on only certain components of control, such as control activities, within the system and not provide a conclusion on the system as a whole. Nevertheless, the assurance practitioner gains an understanding of the strength of the controls as a whole and in doing so may identify deficiencies in the control environment. A deficiency in the control environment may undermine the effectiveness of controls, and this is taken into account in determining the nature, timing and extent of assurance procedures to test the design, implementation and operating effectiveness of controls. For example, the assurance practitioner may consider the “tone at the top”, including the entity’s track record of adherence to controls, and the monitoring activities, which may include the activities conducted by internal audit. If the control environment or other components of control are assessed as ineffective, this will increase the risk of deviations in the operating effectiveness of controls, if included in the scope of the engagement, and impact the nature, timing and extent of assurance procedures.
The assurance practitioner obtains an understanding of the components of control to understand how they may impact the effectiveness of the component which is included in the scope of the engagement. This understanding of the control components may comprise the following:
- the control environment, including whether:
- management, with the oversight of those charged with governance, has created and maintained a culture of honesty and ethical behaviour; and
- the strengths in the control environment elements collectively provide an appropriate foundation for the other components of internal control, and whether those other components are not undermined by deficiencies in the control environment;
- risk assessment process, including whether the entity has a process for:
- identifying risks which threaten achievement of control objectives;
- estimating the significance of the risks;
- assessing the likelihood of their occurrence; and
- deciding about actions to address those risks;
- the information system and communication including the following areas:
- the entity’s operations that are significant to the system;
- the procedures, within both IT and manual systems, by which those functions and services are initiated, recorded, transmitted, processed, corrected as necessary, summarised and reported;
- the records and supporting information that are used to initiate, record, process and report on the system; this includes the correction of incorrect information and how information is summarised. The records may be in either manual or electronic form;
- how the information system captures events and conditions, that are significant to the system; and
- the reporting process used to prepare the entity’s reports relating to the system, including significant estimates and disclosures;
- control activities within the system, being those the assurance practitioner judges are necessary to understand in order to assess the risks of the control objectives not being achieved; and
- monitoring activities that the entity uses to monitor controls, including the role of the internal audit function, which mitigate the risks that threaten achievement of the control objectives, and how the entity initiates remedial actions to address deficiencies in design or implementation of controls or deviations in the operating effectiveness of its controls.
The division of internal control into the components in paragraph A68(a)-(e) above provides a useful framework for discussing the different aspects of an entity’s internal control which may affect an engagement under this ASAE. However, this division does not necessarily reflect how an entity designs, implements and maintains internal control, or how it may classify any particular component. The assurance practitioner may use different terminology or frameworks to describe the various aspects of internal control and their effect on the engagement.
Management is in a unique position to perpetrate fraud because of management’s ability to manipulate the entity’s records or prepare fraudulent reports by overriding controls that otherwise appear to be operating effectively. Although the level of risk of management override of controls will vary from entity to entity, the risk is nevertheless present in all entities. Due to the unpredictable way in which such override could occur, it presents a risk that control objectives will not be achieved due to fraud and is therefore a significant risk.
In obtaining an understanding of the system, including controls, the assurance practitioner determines whether the entity has an internal audit function and its effect on the controls within the system. The internal audit function ordinarily forms part of the entity’s internal control and governance structures. The responsibilities of the internal audit function may include, for example, monitoring of internal control, risk management, and review of compliance with laws and regulations, and is considered as part of the assurance practitioner’s assessment of risk.
An effective internal audit function may enable the assurance practitioner to modify the nature and/or timing, and/or reduce the extent of assurance procedures performed, but cannot eliminate them entirely.
Obtaining evidence on the suitability of the design of controls may be conducted simultaneously with gathering evidence on the description, implementation or operating effectiveness of those controls. The objectives of the engagement are not addressed in isolation so that when gathering evidence on the implementation or operating effectiveness of controls the assurance practitioner may also gain a greater understanding of the design of controls and identify additional or compensating controls relevant to the achievement of the control objectives.
In a direct engagement, the assurance practitioner’s evaluation of controls and gathering of evidence to support an assurance conclusion on controls is a single process: the assurance conclusion is also the outcome of the assurance practitioner’s evaluation of the controls. Consequently, there is no separate outcome reported by the assurance practitioner in a direct engagement on controls.
An assurance engagement is an iterative process, and information may come to the assurance practitioner’s attention that differs significantly from that on which the determination of planned procedures was based. As the assurance practitioner performs planned procedures, the evidence obtained may cause the assurance practitioner to perform additional procedures. In the case of an attestation engagement, such procedures may include asking the responsible party to examine the matter identified by the assurance practitioner, and to make amendments to the description or Statement, if appropriate.
The assurance practitioner may become aware of a matter(s) that causes the assurance practitioner to believe that the controls may not be suitably designed, the description may be materially misstated, or the controls may not be implemented as designed or operating effectively. In such cases, the assurance practitioner may investigate such matters by, for example, enquiring of the appropriate party(ies) or performing other procedures as appropriate in the circumstances.
The level of assurance obtained in a limited assurance engagement is lower than in a reasonable assurance engagement, therefore the procedures the assurance practitioner performs in a limited assurance engagement are different in nature and timing from, and are less in extent than for, a reasonable assurance engagement. The primary differences between the assurance practitioner’s overall responses to assessed risks and further procedures conducted in a reasonable assurance engagement and a limited assurance engagement on controls include:
- the emphasis placed on the nature of various procedures as a source of evidence will likely differ, depending on the engagement circumstances. For example, the assurance practitioner may judge it to be appropriate in the circumstances of a particular limited assurance engagement to place relatively greater emphasis on indirect testing of controls, such as enquiries of the entity’s personnel, and relatively less emphasis on direct testing of controls, such as observation, re-performance or inspection, than may be the case for a reasonable assurance engagement.
- in a limited assurance engagement, the further procedures performed are less in extent than in a reasonable assurance engagement in that those procedures may involve:
- selecting fewer items for examination;
- performing fewer types of procedures; or
- performing procedures at fewer locations.
In evaluating whether a control is suitably designed, either individually or in combination with other controls, to achieve the related control objectives, the assurance practitioner may use flowcharts, questionnaires or decision tables to facilitate understanding the design of the controls.
Controls are directed at preventing, detecting or correcting a failure to achieve a control objective, whether due to fraud or error. Controls may consist of a number of activities directed at the achievement of a control objective. Consequently, if the assurance practitioner evaluates certain activities as being ineffective in achieving a particular control objective, the existence of other activities may allow the assurance practitioner to conclude that controls related to the control objective are suitably designed.
The assurance practitioner’s evaluation of the design of the controls includes procedures to assess whether the controls as designed would, individually or in combination with other controls, mitigate the risks which threaten achievement of the identified control objectives, by preventing or detecting and correcting failures to achieve a control objective. These procedures may include:
- Enquiries of management and staff regarding the operation of controls and the types of errors or failures that have occurred or may occur.
- Consideration of flowcharts, questionnaires, decision tables or system descriptions to understand the design.
- Inspection of documents evidencing prevention, detection or correction of failures to achieve a control objective.
When evaluating the suitability of the design of controls to prevent, detect or correct fraud, the assurance practitioner considers whether the following fraud risk factors are adequately mitigated by the designed controls:
- any incentives or pressures to commit fraud, such as performance targets, shareholder/investor expectations, results based remuneration or bonuses, reporting or liability thresholds or individual circumstances (such as gambling or personal debts);
- perceived opportunities to do so, such as individuals holding a position of trust or inadequate controls; and
- any possible rationalisations for doing so, such as underpaid, overworked or otherwise disgruntled employees.
Controls can mitigate but not eliminate the risk of fraud, which may threaten achievement of the identified control objectives. In evaluating the suitability of the design of controls, the assurance practitioner considers whether the controls mitigate the risk of fraud perpetrated by way of:
- manipulation, falsification (including forgery) or alteration of records or supporting documentation;
- misrepresentation in, or intentional omission from records or reports, relevant events, activities, transactions or other significant information;
- intentional misapplication of criteria relating to the measurement or quantification of amounts, classification, manner of presentation or disclosure; or
- misappropriation of assets or rights through diversion, stealing, false claims or unauthorised personal use.
Suitably designed controls may be undermined by deficiencies in other components of control, or by other competing factors within the entity, which the assurance practitioner may need to consider. These risks may be addressed through indirect controls; if so, those controls may need to be considered in evaluating the suitability of the design of controls.
When evaluating the suitability of the design of controls the assurance practitioner may identify controls which are either included in the design but omitted from the description or included in the description but are ineffective in achieving the control objectives. Where that description is available to users, the assurance practitioner follows the requirements of paragraphs 65 and 66 and clearly identifies the controls to which the conclusion on the design relates.
In a reasonable assurance engagement, when obtaining an understanding of the control environment and considering other components of controls, not included in the scope of the engagement, the assurance practitioner may consider, for example: the tone at the top, extent of management override, the policies regarding recruitment and training of suitably qualified and competent staff and access controls for IT systems. These controls may fall within other components of control not being directly tested, but which may undermine the design, implementation or effective operation of the controls included in the scope of the engagement.
In obtaining evidence as to whether those aspects of the description included in the scope of the engagement are fairly presented in all material respects, the assurance practitioner determines whether:
- The description addresses the major aspects of the system, being the function or service provided that could reasonably be expected to be relevant to the expected users.
- The description is prepared at a level of detail that provides for the needs of users as reflected in the purpose of the engagement. However, if the description is to be distributed outside the entity, it need not be so detailed as to potentially allow a reader to compromise security or other controls at the entity.
- The description accurately reflects the controls as designed and, if included in the scope of the engagement, implemented, which relate to each of the control objectives identified and does not omit or distort information.
- The description identifies any functions or services subject to the engagement which are outsourced to a third party and whether the inclusive or carve-out method has been used with respect to the controls operating at the third party relevant to the control objectives included in the scope of the engagement. If the inclusive method has been used, whether the description clearly distinguishes the controls operating at the entity from the controls operating at the third party.
An example of a description of the system is contained in Appendix 7, example 2.
In obtaining evidence as to whether complementary user-entity or client controls included in the description are adequately described, the assurance practitioner may:
- compare the information in the description to contracts with user entities;
- compare the information in the description to system or procedure manuals; and
- make enquiries of management and staff to gain an understanding of the user entity’s responsibilities regarding achieving the control objectives and whether those responsibilities are adequately described.
The assurance practitioner’s evaluation of the description may be performed in conjunction with procedures to obtain an understanding of that system. These procedures may include:
- Enquiries of management and staff including, where the scope of the engagement is over a period, specific enquiries about changes in controls that were designed or implemented during the period.
- Observing procedures performed by the entity’s personnel.
- Reviewing the entity’s policy and procedures manuals and other systems documentation, for example, flowcharts and narratives.
- Reviewing documentary evidence as to the manner in which the controls were implemented.
- Walk‑through of control procedures or tracing items through the entity’s system.
If a control is suitably designed, the assurance practitioner determines, if included in the scope of the engagement, whether the control is implemented by assessing that the implementation process has been carried out so that the control can operate effectively as designed. Implementation is a process, the completion of which can usually be tested on or after the delivery date, although in some cases it may need to be tested during the implementation process if evidence is not available once the control is in place. The nature of the procedures selected by the assurance practitioner to test implementation of controls will depend on the characteristics of the system within which the controls are designed to operate, the processes by which the controls are implemented and the sources of evidence available regarding implementation.
The effective implementation of controls, which enables those controls to operate effectively once they are delivered and in operation, usually involves a number of processes which may include:
- Documentation of controls.
- Development of manuals, instructions and policies for users/operators.
- Allocation of responsibility for operation of each control and procurement or reallocation of human resources to operate and monitor those controls.
- Communication with and training of users/operators in the control methodology and related technology.
- Development or acquisition of IT systems and/or data storage.
- Procurement of outsourced IT services under a service level agreement which specifies controls required to meet the system design.
- Installation, configuration and testing of IT systems and/or data storage.
- Acquisition and installation of equipment, IT hardware, physical security and other infrastructure.
- Establishment of backup for operation of controls in the event of disaster or system failure, such as power outage, infrastructure failure or IT system failure, or routine events, such as staff absences.
Obtaining Evidence Regarding Operating Effectiveness of Controls (Ref: Para. 56-62)
Assessing Operating Effectiveness
If a control is suitably designed, the assurance practitioner determines, if included in the scope of the engagement, whether the control is operating effectively by assessing whether it operated throughout the period as designed, in all material respects. If suitably designed and operating effectively, a control, individually or in combination with other controls, achieves the related control objectives in all material respects. When the engagement includes operating effectiveness, implementation does not need to be separately tested or concluded upon, as the purpose of effective implementation of a control is that the control will operate effectively.
Evidence about the operation of material controls in prior periods cannot be used as evidence of the operating effectiveness of those controls in the current period; however, it may be useful, when planning the engagement, in understanding the entity and its environment and in identifying risks based on past deviations in the operation of controls. Controls are material to the engagement either when they are themselves to be concluded on in the assurance report or when they are material to achieving the control objectives to be concluded on in the assurance report. Controls which are not material to the assurance report conclusion may be tested by rotation for an on-going engagement on controls, in combination with walk-through tests to identify any changes which have occurred to those controls. For example, a three-year cycle for the rotation of immaterial controls may be appropriate.
In a limited assurance engagement, ASAE 3000 requires the assurance practitioner to identify areas where a material misstatement of the subject matter information is likely to arise. However, in a limited assurance engagement on controls, the assurance practitioner assesses the risks of material deviations in the operating effectiveness of controls, as the requirement in ASAE 3000 cannot be readily interpreted for a controls engagement and may not result in a meaningful conclusion.
The nature of a control procedure often influences the nature of the tests of operating effectiveness that can be performed. For example, the assurance practitioner may examine evidence regarding controls where such evidence exists; however, documentary evidence regarding some controls often does not exist. In these circumstances, the tests of operating effectiveness may consist of enquiry and observation only. However, there is a risk that the control may be triggered by the enquiry and observation and may not operate at other times during the period. Therefore, the assurance practitioner would, in conjunction with those procedures, seek to obtain other supporting evidence by looking to the outcomes from the system, for example substantive testing of the accuracy of the information over which the controls operate.
The decision about what comprises sufficient appropriate evidence is a matter of professional judgement. The assurance practitioner may consider, for example:
- the nature of the system;
- the significance of the control procedure in achieving the relevant objective(s);
- the nature and extent of any tests of operating effectiveness performed by the entity in monitoring controls (management, internal audit function or other personnel); and
- the likelihood that the control procedure will not reduce to an acceptably low level the risks relevant to the objective(s). This may involve consideration of:
- the design effectiveness of the control;
- changes in the volume or nature of transactions that might affect design or operating effectiveness (for example, an increase in the volume of transactions may make it tedious to identify and correct errors, thereby creating a disincentive among entity personnel to perform the control);
- whether there have been any changes in the control procedure (personnel may not be aware of the change or may not understand the way it operates thus inhibiting effective implementation);
- the interdependence of the control upon other controls (for example, the design of controls associated with the cash receipts function may be assessed as effective; however, their operating effectiveness may be poor due to a lack of segregation of duties);
- changes in key personnel who are responsible for performing the control or monitoring its performance (this may result in insufficient knowledge about how the control should operate or lack of awareness of their responsibilities with respect to the control);
- whether the control is manual or automated and the significance of the information system’s general controls (manual controls may allow a greater degree of override in a weak control environment, whereas adequately tested IT controls will consistently perform a function based on agreed specifications);
- the complexity of the control (a complex procedure may promote non-compliance if personnel are not adequately trained in the operation of the procedure);
- environmental factors which may influence compliance with the control (employees may circumvent controls when they are time-consuming and formal or informal performance assessment relates to speed or throughput);
- whether more than one control achieves the same objective (the assessment of a procedure as ineffective would not necessarily preclude its objective from being achieved as other procedures that are pervasive in nature may address this objective); and
- whether there have been any changes in the processes adopted by an entity (for example, a change in a process may render a particular control procedure ineffective).
Obtaining an understanding of controls sufficient to conclude on the suitability of their design is not sufficient evidence regarding their operating effectiveness, unless there is some automation that provides for the consistent operation of the controls as they were designed and implemented. For example, obtaining information about the implementation of a manual control at a point in time does not provide evidence about operation of the control at other times. However, because of the inherent consistency of IT processing, performing procedures to determine the design of an automated control, and whether it has been implemented, may serve as evidence of that control’s operating effectiveness. Whether reliance can be placed on the consistent operation of an automated control will depend on the assurance practitioner’s assessment and testing of other controls, such as general IT controls, including those over program changes and system access.
To be useful to users and not be potentially misleading, an assurance report on operating effectiveness over a period ordinarily covers a minimum period of six months. The assurance practitioner considers the reasons a shorter period has been selected by the engaging party, whether sufficient instances of the control will be triggered during that period, and whether there is any indication of bias in the period selected that may avoid possible deviations. If a period of less than six months is justifiable, the assurance practitioner may consider it appropriate to describe the reasons for the period chosen in the assurance practitioner’s assurance report. Circumstances that may result in a report covering a period of less than six months include when:
- the assurance practitioner is engaged close to the date by which the report on controls is to be issued;
- the system of controls has been in operation for less than six months; or
- significant changes have been made to the system of controls and it is not practicable either to wait six months before issuing a report or to issue a report covering the system both before and after the changes.
Certain control procedures may not leave evidence of their operation that can be tested at a later date and, accordingly, the assurance practitioner may find it necessary to test the operating effectiveness of such control procedures at various times throughout the reporting period.
If the assurance practitioner provides a conclusion on the operating effectiveness of controls, that conclusion relates to the operation of controls throughout each period; therefore, sufficient appropriate evidence about the operation of controls during the current period is required for the assurance practitioner to express that conclusion. Knowledge of deviations observed in prior engagements may, however, lead the assurance practitioner to increase the extent of testing during the current period.
Evidence of the operating effectiveness of a control subsequent to period end, for a control which did not operate during the period as it was not triggered, may be used in combination with evidence that the circumstances necessary to trigger the control during the period did not arise and those circumstances were adequately monitored.
In some circumstances, it may be necessary to obtain evidence supporting the effective operation of indirect controls. Controls over the accuracy of the information in exception reports (for example, the general IT controls) are described as “indirect” controls. For example, because of the inherent consistency of IT processing, evidence about the implementation of an automated process control, when considered in combination with evidence about the operating effectiveness of the entity’s indirect general IT controls (in particular, change controls), may also provide substantial evidence about its operating effectiveness.
The means of selecting items for testing available to the assurance practitioner are:
- Selecting all items (100% examination): This may be appropriate for testing controls that are applied infrequently, for example, quarterly, or when evidence regarding application of the control makes 100% examination efficient;
- Selecting specific items: This may be appropriate where 100% examination would not be efficient and sampling would not be effective, such as testing controls that are not applied frequently enough to produce a large population for sampling, for example, controls that are applied monthly or weekly; and
- Sampling: This enables the assurance practitioner to obtain evidence about the items selected in order to form a conclusion about the whole population from which the sample is drawn. Sampling may be appropriate for testing controls that are applied frequently in a uniform manner and which leave documentary evidence of their application.
While selective examination of specific items will often be an efficient means of obtaining evidence, it does not constitute sampling. The results of procedures applied to items selected in this way cannot be projected to the entire population; accordingly, selective examination of specific items does not provide evidence concerning the remainder of the population. Sampling, on the other hand, is designed to enable conclusions to be drawn about an entire population on the basis of testing a sample drawn from it.
When designing a controls sample for testing operating effectiveness of controls, the assurance practitioner considers the specific purpose to be achieved and the combination of assurance procedures that is likely to best achieve that purpose, including determining:
- What constitutes a deviation.
- The characteristics of the population to use for sampling and whether that population is complete.
- Whether statistical or non-statistical sampling is to be applied.
- Whether stratification or value-weighted selection is appropriate.
- The sample size based on the level of sampling risk which the assurance practitioner will tolerate.
In considering the characteristics of a population, the assurance practitioner makes an assessment of the expected rate of deviation based on the assurance practitioner’s understanding of the relevant controls or on the examination of a small number of items from the population. This assessment is made in order to design a sample and to determine the sample size.
With statistical sampling, sample items are selected in such a way that each sampling unit has a known probability of being selected. With non-statistical sampling, judgement is used to select sample items. Because the purpose of sampling is to provide a reasonable basis for the assurance practitioner to draw conclusions about the population from which the sample is selected, it is important that the assurance practitioner select a representative sample, avoiding bias, by choosing sample items which have characteristics typical of the population. The principal methods of selecting samples are random selection, systematic selection and haphazard selection.
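As an illustration only (and not part of this ASAE), the random and systematic selection methods described above can be sketched as follows; the population, sample size and starting point shown are hypothetical:

```python
import random

def random_selection(population, sample_size, seed=None):
    # Random selection: every sampling unit has a known, equal
    # probability of being chosen.
    rng = random.Random(seed)
    return rng.sample(population, sample_size)

def systematic_selection(population, sample_size, start=None):
    # Systematic selection: every k-th item after a random starting
    # point, where k is the sampling interval.
    interval = len(population) // sample_size
    if start is None:
        start = random.randrange(interval)
    return [population[start + i * interval] for i in range(sample_size)]

# Hypothetical population: 260 weekly applications of a control over 5 years.
applications = list(range(1, 261))
random_sample = random_selection(applications, 25, seed=1)
systematic_sample = systematic_selection(applications, 25, start=0)
```

Haphazard selection, the third method, is by its nature not programmable; it relies on the practitioner choosing items without a structured technique while still avoiding conscious bias.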
Efficiency may be improved if the assurance practitioner stratifies a population by dividing it into discrete sub-populations which have an identifying characteristic. The objective of stratification is to reduce the variability of items within each stratum and therefore allow sample size to be reduced without increasing sampling risk. Controls in a population may be stratified by characteristics, such as the level of approval required, the value or volume of the underlying data, the frequency of the control’s application or the complexity of the control’s application.
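As an illustration only (and not part of this ASAE), dividing a population of controls into strata by an identifying characteristic, here a hypothetical approval level, can be sketched as:

```python
from collections import defaultdict

def stratify(population, key):
    # Divide the population into discrete sub-populations (strata)
    # which share an identifying characteristic.
    strata = defaultdict(list)
    for item in population:
        strata[key(item)].append(item)
    return dict(strata)

# Hypothetical control applications keyed by the approval level required.
controls = [
    {"id": 1, "approval": "manager"},
    {"id": 2, "approval": "supervisor"},
    {"id": 3, "approval": "manager"},
    {"id": 4, "approval": "director"},
]
by_approval = stratify(controls, key=lambda c: c["approval"])
```

Each stratum can then be sampled separately, which is what allows the overall sample size to be reduced without increasing sampling risk.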
Auditing Standard ASA 530 Audit Sampling can be used as further guidance on sampling and sample selection methods.
The deviation rate for the sample of controls tested is also the projected deviation rate for the whole population. The closer the projected deviation rate for a control not operating effectively is to the tolerable rate of deviation, the more likely it is that actual deviations in the population exceed tolerable deviations. Also, if the projected deviation rate is greater than the assurance practitioner’s expectation of the deviation rate used to determine the sample size, the assurance practitioner may conclude that there is an unacceptable sampling risk that the actual deviations in the population exceed the tolerable deviations. If controls have been divided into strata, the projected deviation rate is determined for each stratum separately. Projected deviations for each stratum are then combined when considering the possible effect of deviations on the whole population.
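As an illustration only (and not part of this ASAE), projecting the sample deviation rate to the population and comparing it with the tolerable and expected rates can be sketched with hypothetical figures:

```python
def projected_deviation_rate(deviations, sample_size):
    # The sample deviation rate is also the projected deviation
    # rate for the whole population.
    return deviations / sample_size

# Hypothetical figures for one control:
sample_size = 40       # applications of the control tested
deviations = 2         # deviations found in the sample
tolerable_rate = 0.08  # tolerable rate of deviation
expected_rate = 0.02   # expected rate used when sizing the sample

projected = projected_deviation_rate(deviations, sample_size)  # 0.05
# The projected rate (5%) is below the tolerable rate (8%) but above
# the expected rate (2%) used to determine the sample size, so the
# assurance practitioner may still judge sampling risk unacceptable
# and extend testing.
```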
Considering the results of other procedures helps the assurance practitioner to assess the risk that actual deviations in the operating effectiveness of controls in the population exceed tolerable deviations, and the risk may be reduced if additional evidence is obtained. The assurance practitioner might extend the sample size or, unless the controls themselves (rather than the control objectives) are being concluded upon, such as when controls are specified by the responsible party or legislation, test alternative or mitigating controls.
The significance of a deviation or a combination of deviations in the operating effectiveness of a control depends on whether the related control objective was not, or is likely not to be, achieved as a result, and on the materiality of the impact on the assurance practitioner’s conclusion of the control objective not being achieved.
Examples of matters that the assurance practitioner may consider in determining whether a deviation or combination of deviations in the operating effectiveness of controls is material include:
- The likelihood of the deviation/s leading to a material control objective not being achieved.
- The susceptibility to loss or fraud of the underlying subject matter to which the control applies.
- The subjectivity and complexity of determining estimated amounts.
- The monetary value of items exposed to the control deviations.
- The volume of activity that has been exposed or could be exposed to the control deviations.
- The importance of the controls to the system and the control objectives; for example:
- General monitoring controls (such as oversight of management).
- Controls over the prevention and detection of fraud.
- Controls over the selection and application of significant accounting or measurement policies.
- Controls over significant transactions or activity with related parties.
- Controls over significant transactions or activity outside the entity’s normal course of business.
- Controls over the period-end adjustments.
- The cause and frequency of the exceptions detected as a result of the deviations in the controls.
- The interaction of the deviation with other deviations in internal control.
In responding to fraud or suspected fraud identified during the engagement, it may be appropriate for the assurance practitioner to, for example:
- discuss the matter with the appropriate level of management;
- request management to consult with an appropriately qualified third party, such as the entity’s legal counsel or a regulator;
- consider the implications of the matter in relation to other aspects of the engagement, including the assurance practitioner’s risk assessment and the reliability of written representations from the entity;
- obtain legal advice about the consequences of different courses of action;
- communicate with third parties (for example, a regulator);
- withhold the assurance report; or
- withdraw from the engagement.
ASAE 3000 provides application material for the circumstances where an assurance practitioner’s expert is involved in the engagement. This material may also be used as helpful guidance when using the work of another assurance practitioner or a responsible party’s or evaluator’s expert.
Work Performed by Another Assurance Practitioner or a Responsible Party’s or Evaluator’s Expert
(Ref: Para. 76)
The design, description, implementation or operation of an entity’s controls may require specialist expertise, such as IT expertise for security and access controls to the IT systems, or engineering expertise for calibration of instruments or machinery measuring energy usage or production as a basis for controls over the completeness of emissions estimations. The necessary experts may be engaged or employed by the entity’s management, and failure to do so when such expertise is necessary increases the risks of a deficiency in the design, a misstatement in the description, a deficiency in the implementation or a deviation in the operation of the controls.
When information on controls to be used as evidence has been prepared using the work of a responsible party’s or evaluator’s expert, the nature, timing and extent of procedures with respect to the work of the responsible party’s or evaluator’s expert may be affected by such matters as:
- the nature and complexity of the controls to which the expert’s work relates;
- the risks of a material deficiency in the design, deficiency in implementation or deviation in operating effectiveness of relevant controls;
- the availability of alternative sources of evidence or mitigating controls;
- the nature, scope and objectives of the expert’s work;
- whether the expert is employed by the entity, or is a party engaged by it to provide relevant services;
- the extent to which the responsible party or evaluator can exercise control or influence over the work of the expert;
- whether the expert is subject to technical performance standards or other professional or industry requirements;
- the nature and extent of any controls within the entity over the expert’s work;
- the assurance practitioner’s knowledge and experience of the expert’s field of expertise; and
- the assurance practitioner’s previous experience of the work of that expert.
The nature, timing and extent of the assurance practitioner’s procedures on specific work of the internal auditors will depend on the assurance practitioner’s assessment of the significance of that work to the assurance practitioner’s conclusions (for example, the significance of the risks that the controls tested seek to mitigate), the evaluation of the internal audit function and the evaluation of the specific work of the internal auditors. Such procedures may include:
- examination of evidence of the operation of controls already examined by the internal auditors;
- examination of evidence of the operation of other instances of the same controls;
- examination of the outcomes of monitoring of controls by internal auditors; and
- observation of procedures performed by the internal auditors.
Irrespective of the degree of autonomy and objectivity of the internal audit function, such a function is not independent of the entity as is required of the assurance practitioner when performing the engagement. The assurance practitioner has sole responsibility for the conclusion expressed in the assurance report, and that responsibility is not reduced by the assurance practitioner’s use of the work of the internal auditors.
The person(s) from whom the assurance practitioner requests written representations will ordinarily be a member of senior management or those charged with governance. However, because management and governance structures vary by jurisdiction and by entity, reflecting influences such as different cultural and legal backgrounds, and size and ownership characteristics, it is not possible for this ASAE to specify for all engagements the appropriate person(s) from whom to request written representations. The process to identify the appropriate person(s) from whom to request written representations requires the exercise of professional judgement.
Examples of written representations in the form of representation letters are contained in Appendix 6.
Assurance procedures with respect to the identification of subsequent events after period end are limited to the examination of relevant reports (for example, reports on control procedures and minutes of relevant committees) and enquiry of management or other personnel as to significant non-compliance with control procedures.
The matters identified may provide:
- additional evidence or reveal for the first time conditions that existed during the period on which the assurance practitioner is reporting; or
- evidence about conditions that existed subsequent to the period on which the assurance practitioner is reporting that may significantly affect the operation of the control procedures.
In the circumstances described in paragraph A120(a), the assurance practitioner reassesses any conclusions previously formed that are likely to be affected by the additional evidence obtained.
In the circumstances described in paragraph A120(b) when the assurance practitioner’s report has not already been issued:
- in an attestation engagement, the assurance practitioner:
- includes an Emphasis of Matter where the responsible party’s Statement is available to users and adequately discloses the subsequent event; or
- issues a qualified conclusion if the responsible party’s Statement is available to users and does not adequately disclose the subsequent event; and
- in a direct engagement, the assurance practitioner includes a paragraph in the assurance report headed “Subsequent Events” describing the events and indicating that the subsequent events do not impact the assurance conclusion but they may affect the future effectiveness of the control procedures.
The assurance practitioner does not have any responsibility to perform procedures or make any enquiry after the date of the report. If, however, after the date of the report, the assurance practitioner becomes aware of a matter identified in paragraph A120, the assurance practitioner considers re-issuing the report. In an attestation engagement where the report has already been issued, the new report includes an Emphasis of Matter discussing the reason for the new report. In a direct engagement, the new report discusses the reason for the new report under a heading “Subsequent Events”.
Relevant ethical requirements require that an assurance practitioner not be associated with information where the assurance practitioner believes that the information:
- contains a materially false or misleading statement;
- contains statements or information furnished recklessly; or
- omits or obscures information required to be included where such omission or obscurity would be misleading.
If other information included in a document containing the assurance practitioner’s report includes future‑oriented information such as recovery or contingency plans, or plans for modifications to the system that will address deficiencies or deviations identified in the assurance practitioner’s report, or claims of a promotional nature that cannot be reasonably substantiated, the assurance practitioner may request that information be removed or restated.
Scrutiny of documents containing the assurance practitioner’s report that are to be made publicly available is more critical than scrutiny of reports to be distributed internally within the responsible party or amongst other users who are knowledgeable about the circumstances of the engagement.
Control consists of a number of integrated processes directed at the achievement of specific control objectives, which together contribute to the achievement of overall objectives. The scope of the assurance practitioner’s engagement may be centred on the achievement of overall objectives or may go to the level of specific objectives. Some controls may have a pervasive effect on achieving many overall objectives, whereas others are designed to achieve a specific objective. Because of the pervasive nature of some controls, the assurance practitioner may find several controls that affect the risks relevant to a particular objective. Consequently, when the assurance practitioner evaluates a control as being unsuitably designed, not implemented as designed or operating ineffectively to achieve a specific objective the assurance practitioner does not, on this basis alone, conclude that that objective will not be achieved. The assurance practitioner will also need to consider the effect of this evaluation on the operation of other related controls and identify any compensating controls which may mitigate the ineffective control, in order to determine the effect of the ineffective control on the assurance practitioner’s conclusion.
In assessing the impact of uncorrected deficiencies in the design, misstatements in the description, deficiencies in the implementation or deviations in operating effectiveness of controls, the assurance practitioner considers the impact of those matters on each other. For example, controls may still be suitably designed, implemented as designed and operating effectively, even if the description is materially misstated and does not appropriately reflect the controls as designed. However, if the design of controls is unsuitable, the assurance practitioner does not test the implementation or operating effectiveness of those unsuitable controls and the assurance practitioner’s conclusion on implementation or operating effectiveness relates only to the controls which are suitably designed.
A statement of the limitations of controls in the assurance report states that:
- because of inherent limitations in any system, it is possible that fraud, error, or non-compliance with laws and regulations may occur and not be detected. Further, the system, within which the control procedures that have been assured operate, has not been assured and no conclusion is provided as to its effectiveness;
- a reasonable/limited assurance engagement, which includes operating effectiveness of controls, is not designed to detect all instances of controls operating ineffectively as it is not performed continuously throughout the period and the tests performed on the control procedures are on a sample basis; and
- any projection of the outcome of the evaluation of the controls to future periods is subject to the risk that the procedures may become inadequate because of changes in conditions, or that the degree of compliance with them may deteriorate.
The assurance practitioner may expand the report to include other information not intended as a qualification of the assurance practitioner’s conclusion. If the report includes other information, it is a long-form report, as the information is additional to the basic elements required in paragraph 89 for a short-form report. This additional information may be required by regulation or agreed in the terms of engagement to meet the needs of users. When considering whether to include any such information, the assurance practitioner assesses the materiality of that information in the context of the objectives of the engagement. Other information is not to be worded in such a manner that it may be regarded as a qualification of the assurance practitioner’s conclusion, and may include, for example:
- A description of the facts and findings relating to particular aspects of the engagement.
- The specific control objectives, related controls, the tests of controls that were performed and the results of those tests.
- Recommendations for improvements to address identified control design deficiencies, implementation deficiencies or deviations in operating effectiveness.
- Control deficiencies or deviations not considered significant because the cost of the control exceeds the benefit.
If the terms of the engagement require the results of the tests of controls to be reported, then the assurance practitioner, in describing the tests of controls, clearly states which controls were tested, identifies whether the items tested represent all or a selection of the items in the population, and indicates the nature of the tests in sufficient detail to be useful to users. If deviations have been identified, the assurance practitioner includes the extent of testing performed that led to identification of the deviations (including the sample size where sampling has been used), and the number and nature of the deviations noted. The assurance practitioner reports deviations even if, on the basis of tests performed, the assurance practitioner has concluded that the related control objective was achieved.
If the criteria are adequately described in a source that is readily accessible to the intended users of the assurance practitioner’s report, the assurance practitioner may identify those criteria by reference, rather than by repetition in the assurance practitioner’s report or an appendix to the report, for example, if the criteria are published and generally available, or if they are detailed in a description of the system. The controls designed to achieve the control objectives, as criteria for implementation or operating effectiveness of controls, are not usually detailed in the assurance report, unless set out in the description of the system. As the control objectives provide the criteria for evaluation of the design of controls, against which implementation or operating effectiveness are then evaluated, the control objectives also provide the criteria for the controls engagement as a whole. Consequently, in making the criteria available to users it is usually sufficient for the control objectives to be identified.
In some cases the control objectives used to assess the controls may be identified for a specific purpose. For example, a regulator may require certain entities to use particular criteria designed for regulatory purposes. To avoid misunderstandings, the assurance practitioner alerts users of the assurance report to this fact and that, therefore, the description of controls may not be suitable for another purpose.
The assurance practitioner may consider it appropriate to indicate that the assurance report is intended solely for specific users. Depending on the engagement circumstances, for example, the law or regulation of the particular jurisdiction, this may be achieved by restricting the distribution or use of the assurance report. While an assurance report may be restricted in this way, the absence of a restriction regarding a particular user or purpose does not itself indicate that a legal responsibility is owed by the assurance practitioner in relation to that user or for that purpose. Whether a legal responsibility is owed will depend on the legal circumstances of each case and the relevant jurisdiction.
The summary of the work performed helps the intended users understand the nature of the assurance conveyed by the assurance report. For many assurance engagements, infinite variations in procedures are possible in theory. It may be appropriate to include in the summary a statement that the work performed included evaluating the suitability of the control objectives and the risks that threaten achievement of those objectives.
In a limited assurance engagement an appreciation of the nature, timing, and extent of procedures performed is essential to understanding the assurance conveyed by the conclusion, therefore the summary of the work performed is ordinarily more detailed than for a reasonable assurance engagement and identifies the limitations on the nature, timing, and extent of procedures. It also may be appropriate to indicate certain procedures that were not performed that would ordinarily be performed in a reasonable assurance engagement. However, a complete identification of all such procedures may not be possible because the assurance practitioner’s required understanding and consideration of engagement risk is less than in a reasonable assurance engagement.
Factors to consider in determining the level of detail to be provided in the summary of the work performed include:
- circumstances specific to the entity (e.g. the differing nature of the entity’s control environment compared to those typical in the sector);
- specific engagement circumstances affecting the nature and extent of the procedures performed; and
- the intended users’ expectations of the level of detail to be provided in the report, based on market practice, or applicable law or regulation.
It is important that the summary be written in an objective way that allows intended users to understand the work done as the basis for the assurance practitioner’s conclusion. In most cases this will not involve detailing the entire work plan; on the other hand, it is important that the summary not be so abbreviated as to be ambiguous, nor written in a way that is overstated or embellished.
Illustrative examples of assurance practitioner’s reports are contained in Appendix 8.
If the assurance practitioner’s report on controls has been prepared for a specific purpose and is only relevant to the intended users, this is stated in the assurance practitioner’s report. In addition, the assurance practitioner may consider it appropriate to include wording that specifically restricts distribution of the assurance report other than to intended users, its use by others, or its use for other purposes.
Modifications to the assurance report may be made in the following circumstances:
- a qualified conclusion may be issued if the following matters are material but not pervasive:
  - unsuitable criteria mandated by legislation or regulation;
  - scope limitation;
  - deficiency in the design of controls to achieve each material control objective;
  - misstatement in the description;
  - deficiency in the implementation of controls as designed; or
  - deviation in the operating effectiveness of controls.
- an adverse conclusion may be issued if the following matters are both material and pervasive:
  - unsuitable criteria mandated by legislation or regulation;
  - deficiency in the design of controls to achieve the control objectives;
  - misstatement in the description;
  - deficiency in the implementation of controls as designed; or
  - deviation in the operating effectiveness of controls.
- a disclaimer may be issued if there is a limitation of scope which is both material and pervasive.
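The decision rule underlying the circumstances above can be summarised as a minimal sketch. This is illustrative only: the function name, its boolean inputs and the `Conclusion` labels are assumptions introduced here, and the actual assessment of materiality and pervasiveness is a matter of professional judgement, not a mechanical test.

```python
from enum import Enum


class Conclusion(Enum):
    """Illustrative labels for the forms of assurance conclusion."""
    UNMODIFIED = "unmodified"
    QUALIFIED = "qualified"
    ADVERSE = "adverse"
    DISCLAIMER = "disclaimer of conclusion"


def conclusion_type(material: bool, pervasive: bool,
                    scope_limitation: bool) -> Conclusion:
    """Sketch of the modification logic described in this section.

    A matter that is not material leaves the conclusion unmodified;
    material but not pervasive leads to a qualified conclusion;
    material and pervasive leads to an adverse conclusion, unless the
    matter is a scope limitation, in which case a disclaimer of
    conclusion may be appropriate.
    """
    if not material:
        return Conclusion.UNMODIFIED
    if not pervasive:
        return Conclusion.QUALIFIED
    # Material and pervasive: scope limitations warrant a disclaimer;
    # other matters (e.g. design deficiencies) warrant an adverse conclusion.
    return Conclusion.DISCLAIMER if scope_limitation else Conclusion.ADVERSE
```

In practice each matter is also weighed individually and in combination with others, as discussed below, so this mapping is a starting point rather than a substitute for the practitioner’s assessment.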
Examples of matters which the assurance practitioner may assess as both material and pervasive and warrant an adverse conclusion include:
- deficiencies in the design of controls which result in the controls being unsuitable to achieve a significant proportion of the control objectives in the scope of the engagement, for which no, or insufficient, suitably designed compensating controls exist;
- deficiencies in the implementation of controls such that they will not be able to operate as designed, which may or will result in a significant proportion of the control objectives in the scope of the engagement not being achieved when the controls are in operation; or
- deviations in the operating effectiveness of controls which may or do result in a significant proportion of the control objectives in the scope of the engagement not being achieved, for which no, or insufficient, suitably designed compensating controls exist.
Typically, misstatements in the description, however extensive, do not of themselves result in an adverse conclusion. If controls have been suitably designed to achieve the control objectives, but those controls are not presented fairly or are misstated in the description, it would be appropriate for the entity to amend that description if it is able to do so. If the entity declines or is unable to amend the description, the assurance conclusion is qualified with respect to the description; however, the controls which would achieve the control objectives can be identified in the assurance report, and their design, implementation or operating effectiveness are able to be assured.
Each control objective is considered individually and in combination with other objectives to assess the impact on the assurance report. Deficiencies in the design, implementation or operating effectiveness of controls to achieve an individual control objective may result in a qualification if that control objective is material to the system that is subject to the engagement.
Whenever the assurance practitioner expresses a qualified conclusion, the assurance practitioner’s report includes a clear description of all the substantive reasons therefor, and:
- a description of the effect of all identified matters on the residual risk of not achieving relevant control objectives; or
- if the assurance practitioner is unable to reliably determine the effect of a matter, a statement to that effect.
Illustrative examples of elements of modified assurance practitioner’s reports are contained in Appendix 9.
Even if the assurance practitioner has expressed an adverse conclusion or a disclaimer of conclusion, it may be appropriate to describe in the basis for modification paragraph the reasons for any other matters of which the assurance practitioner is aware that would have required a modification to the conclusion, and the effects thereof.
When expressing a disclaimer of conclusion because of a scope limitation, it is not ordinarily appropriate to identify the procedures that were performed, nor to include statements describing the characteristics of the assurance practitioner’s engagement; to do so might overshadow the disclaimer of conclusion.
Appropriate actions to respond to the circumstances identified in paragraph 96 may include:
- Obtaining legal advice about the consequences of different courses of action.
- Communicating with those charged with governance of the entity.
- Communicating with third parties (for example, a regulator) when required to do so.
- Modifying the assurance practitioner’s conclusion, or adding an Other Matter paragraph.
- Withdrawing from the engagement.
Certain matters identified during the course of the engagement may be of such importance that they would be communicated to those charged with governance. Unless stated otherwise in the terms of engagement, less important matters would be reported to a level of management that has the authority to take appropriate action.
Example Responsible Party’s Statement on Controls and System Description