The findings suggest that auditors' use of NFMs when performing substantive analytical procedures leaves room for improvement. They also suggest that a relatively simple and efficient prompt to consider NFMs can improve auditor substantive testing in the important area of revenue recognition. Because the evidence indicates that auditors are more likely to respond appropriately to such a prompt when fraud risk is assessed as high, decision-makers should carefully consider what level of assessed fraud risk will elicit the desired behavior from in-charge senior auditors.
For more information on this study, please contact Joe Brazel (jfbrazel@ncsu.edu).
Brazel, J. F., K. L. Jones, and D. F. Prawitt. 2014. Auditors' Reactions to Inconsistencies between Financial and Nonfinancial Measures: The Interactive Effects of Fraud Risk Assessment and a Decision Prompt. Behavioral Research in Accounting 26 (1): 131-156.
Professional standards, auditing texts, and prior research suggest that external auditors can use nonfinancial measures (NFMs) to verify their clients’ reported financial information. These sources also suggest that an inconsistency between a company’s financial performance and related NFMs represents a potential red flag for financial statement fraud. However, recent research indicates that auditors’ attention to NFMs is insufficient to detect inconsistencies between financial data and NFMs. This paper addresses this concern by investigating factors that affect auditors’ use of NFMs when auditing financial statement data. Specifically, the paper investigates whether auditors’ reliance on NFMs and development of revenue expectations are affected by the following factors:
The authors motivate their hypotheses using the Heuristic-Systematic Model from the psychology literature. This model suggests that the contextual features of a judgment affect how an individual processes information. The authors use this theory to predict that auditors who are prompted to use NFMs will be more likely to use NFMs to set revenue expectations under high fraud risk than under low fraud risk.
The research evidence used in this study was gathered in 2009. The authors had in-charge senior auditors from a Big 4 firm complete two experimental tasks. In both experiments, participants were given access to client information and were asked to develop an expectation for a client’s revenue balance. The second experiment introduced an NFM prompt and manipulated fraud risk.
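To make the idea concrete, the sketch below shows the kind of consistency check an NFM prompt encourages: compare the growth in reported revenue against the growth in related NFMs and treat a large, unexplained gap as a red flag. This is a minimal illustration with invented figures and an assumed threshold, not the study's materials.

```python
# Hypothetical NFM consistency check; all figures and the threshold are invented.

def growth(prior, current):
    """Period-over-period growth rate."""
    return (current - prior) / prior

revenue_growth = growth(prior=80_000_000, current=100_000_000)     # +25%
nfm_growth = {
    "employees":        growth(410, 418),                          # ~+2%
    "retail_locations": growth(52, 52),                            # flat
    "units_shipped":    growth(1_900_000, 1_980_000),              # ~+4%
}

# Treat the account as a potential red flag when revenue growth outpaces
# every related NFM by more than an assumed 10-percentage-point threshold.
THRESHOLD = 0.10
if all(revenue_growth - g > THRESHOLD for g in nfm_growth.values()):
    print(f"Red flag: revenue grew {revenue_growth:.0%}, but no related "
          f"NFM grew more than {max(nfm_growth.values()):.0%}.")
```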
This paper is intended to prompt auditors to take advantage of easier access to population data in today’s digital business environment. By abandoning sampling, auditors can develop much more sophisticated models of behavior that identify anomalies in ways that were not possible before. Auditors can also be more creative in how they treat data, whether by aggregating it across organizational subunits or over larger and smaller time units. Most innovative of all, auditors and/or managers can continually update their expectation models by investigating errors and anomalies in real time and correcting them, so that the model is not based on flawed data. We find that such error correction greatly improves the accuracy of analytical procedures. Perhaps the most important finding, however, is that almost all of the expectation models we used gave similarly strong results, which implies that what really matters is the size of the data set. Once auditors move away from sampling, they will find that population data provides great statistical power for analytical procedures, reducing the reliance on finding just the right procedure.
For more information on this study, please contact Alexander Kogan.
Kogan, A., M. Alles, M. Vasarhelyi, and J. Wu. 2014. Design and Evaluation of a Continuous Data Level Auditing System. Auditing: A Journal of Practice & Theory 33 (4): 221-245.
The purpose of this paper is to demonstrate how audit practice may change when auditors have access to real-time population data, to show how real-world data can be used to develop analytical procedures (APs) for continuous auditing (CA), and to compare different APs in a CA context. In the paper we develop a framework for a continuous data level auditing system and use a large sample of procurement data from a major health care provider to simulate an implementation of this framework. The first layer of the framework monitors compliance with deterministic business process rules, and the second layer consists of analytical monitoring of business processes. A distinction is made between exceptions identified by the first layer and anomalies identified by the second one. The unique capability of continuous auditing to investigate (and possibly remediate) the identified anomalies in “pseudo-real time” (e.g., on a daily basis) is simulated and evaluated.
Our simulated implementation of the data-oriented CA system focuses on the procurement-related business processes (BPs) and utilizes data sets extracted from the data warehouse of a healthcare management business with many billions of dollars in assets and close to two hundred thousand employees. The data sets include all procurement cycle daily transactions from October 1, 2003 through June 30, 2004. The number of transaction records for each activity ranges from approximately 330,000 to 550,000. Since we have access to population data, the first step is to undertake tests of details to detect violations of key controls. Once that is done, we turn to determining whether there are anomalies that do not violate any established controls but that may nonetheless be indicative of potential problems.
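As a rough illustration of the first, deterministic layer, the sketch below screens every transaction in a population against simple business process rules and collects the violations as exceptions. The rules and records are invented for illustration; they are not the controls used in the study.

```python
# Minimal sketch of deterministic rule screening over a full population.
# The example rules and transactions are assumptions, not the study's controls.

procurement_txns = [
    {"id": 1, "po_number": "PO-771", "approved": True,  "amount": 4_200},
    {"id": 2, "po_number": None,     "approved": True,  "amount": 9_500},
    {"id": 3, "po_number": "PO-772", "approved": False, "amount": 1_100},
]

rules = {
    "missing_purchase_order": lambda t: t["po_number"] is None,
    "unapproved_payment":     lambda t: not t["approved"],
}

# Every record is tested because the full population is available; each hit
# is an "exception" (a control violation), distinct from the statistical
# anomalies the second, analytical layer looks for.
exceptions = [(t["id"], name)
              for t in procurement_txns
              for name, violated in rules.items()
              if violated(t)]

print(exceptions)  # [(2, 'missing_purchase_order'), (3, 'unapproved_payment')]
```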
The implementation of the analytical procedure component of the CA system requires creating models of expected behavior, which we label “continuity equations” (CEs), to enable anomaly detection. We use advanced statistical models to extract CEs from the data, and then, by seeding errors, we determine how effectively the CE models identify anomalies. We also investigate the effect of conducting APs on data aggregated over time or geographically, as well as the implications of error correction.
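The sketch below shows the continuity-equation idea in its simplest form: fit a statistical relation between an upstream and a downstream daily process metric, seed an error, and flag days whose residuals fall outside a tolerance band. The lag structure, data, and three-sigma band are our assumptions; the paper's actual CE models are more sophisticated.

```python
# Minimal continuity-equation sketch: predict daily payments from lagged
# daily orders, then flag outliers. Data, lag, and band are invented.
import numpy as np

rng = np.random.default_rng(0)
days = 200
orders = rng.normal(1000, 50, days)                              # daily order totals
payments = 0.95 * np.roll(orders, 2) + rng.normal(0, 10, days)   # ~2-day settlement lag

payments[150] += 200   # seed an error to mimic a process anomaly

# Estimate the continuity equation: payments_t ≈ a + b * orders_{t-2}.
X = np.column_stack([np.ones(days - 2), orders[:-2]])
y = payments[2:]
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# Flag days whose residuals exceed an assumed three-sigma band.
residuals = y - (a + b * orders[:-2])
flagged = np.where(np.abs(residuals) > 3 * residuals.std())[0] + 2
print(flagged)   # includes the seeded day 150
```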
Our research shows that when auditors have access to population data, there can be significant changes in the role and sequence of audit procedures. Since data access is not a constraint, tests of details can be carried out first on the complete population to find exceptions to controls and for transaction verification. APs can then be used, again on the complete population, to find anomalies. This paper shows that while there are differences in the predictive ability and detection performance of the various CE models, all models perform reasonably well and no single model performs best on all aspects. From this, two important conclusions can be drawn. First, the choice of a particular model from among the candidate CE models is less important than the fact that all models yield fairly effective AP tests. Second, because all the CE models yield reasonably effective analytical procedures, when auditors have access to complete transaction data, the richness of that disaggregate data, combined with the reorganization of the auditing workflow to implement pseudo-real-time error correction, makes BP problem detection robust across a variety of expectation models. In other words, it is the nature of the data that serves as audit evidence that is the primary driver of audit effectiveness, with the selection of the specific AP a second-order concern: not because the audit benchmark is unimportant, but because auditing at the process level makes anomalies stand out much more obviously in the data.
Bedard’s (2006) discussion of Vandervelde (2006) reinforces the fact that auditors do incorporate the relationships among accounts in their responses to increases in misstatement risk. He also suggests that it is important to consider how this pattern maps to auditors’ risk assessments at the financial statement assertion level. His discussion emphasizes that, in response to fee pressure, auditors may shift planned audit hours between accounts (i.e., from low-risk areas to high-risk areas) rather than increasing overall planned audit hours. Finally, despite Bedard’s (2006) caveat that this result could be due to auditor self-presentation concerns or a change in the mix of audit procedures that does not result in increased hours, it is important to note that auditors do not appear to reduce planned audit hours in response to fee pressure, and this could reflect auditors’ cognizance of the heightened importance that investors and the market currently place on the role of auditing.
Bedard, J. 2006. Discussion of: “The Importance of Account Relations when Responding to Interim Audit Testing Results.” Contemporary Accounting Research 23 (3): 823-831.
This study is a conference discussion of Vandervelde (2006). The purpose of the discussion is to critically analyze the motivation, hypotheses, experimental design, results, and implications of Vandervelde (2006). Please see the summary of Vandervelde (2006) for further details.
The discussant first reviews research on risk-based auditing. The discussant believes that Vandervelde (2006) is studying an important aspect of the audit by examining how auditors incorporate relationships between accounts in their audit testing. Regarding Vandervelde’s (2006) predictions, the discussant believes that Vandervelde’s (2006) hypotheses could more accurately reflect the mathematical model’s predictions. The following points illustrate the primary differences between the expectations in Vandervelde (2006) and Bedard (2006).
The discussant reviews and provides suggestions for Vandervelde’s (2006) motivation, hypotheses, experimental design, and results. The discussant also integrates Vandervelde (2006) in the context of prior research and suggests avenues for future research.
This experiment provides evidence that training in a systems perspective could help auditors analyze complex relationships between accounting data. This could be used to set appropriate analytics expectations and, more importantly, provide a credible way to determine whether management’s representations are well-grounded or not. This method also appears to require less mental effort to implement, since it moves the complicated relationship structure out of memory and onto a model. Given the added complexity of many estimates in today’s companies, systematic methods of processing information like a systems perspective may help to simplify the analysis of the estimates.
For more information on this study, please contact Billy Brewster.
Brewster, B. E. 2011. How a systems perspective improves knowledge acquisition and performance in analytical procedures. The Accounting Review 86 (3): 915-943.
Understanding complicated relationships with multiple links between pieces of information is difficult, as people have limited memory with which to keep all the relationships straight. This problem is evident in setting analytics expectations, as there are many reasons why accounting numbers change from year to year (and the reasons are often related to each other in varying, nonlinear ways). To avoid a “reductionist” perspective, in which pieces of information are considered in isolation and linearly, auditors may be able to construct a better mental model of the situation by using a “systems perspective”. This involves considering how all the parts of a system are related, as well as how the system’s behavior emerges from those interactions. Auditors using a systems perspective (compared to a reductionist perspective) are predicted to be more accurate, more efficient, better able to detect management representations that are inconsistent with the evidence, and better able to integrate new information into their expectations.
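To make the contrast concrete, here is a minimal, invented stock-and-flow sketch in the spirit of the systems perspective: a price is driven by an inventory stock, which accumulates inflows and outflows that themselves depend on the price. The dynamics and parameters are our assumptions, not the study's case materials.

```python
# Invented stock-and-flow sketch: price and inventory form a feedback loop.
inventory = 100.0   # the stock: accumulates flows over time
price = 14.0        # start away from the (assumed) equilibrium price of 10
prices = []

for month in range(24):
    production = 12.0                   # inflow (assumed constant)
    demand = 16.0 - 0.4 * price         # outflow falls as price rises
    inventory += production - demand    # the stock integrates the net flow
    # Price responds to scarcity: it rises when inventory runs down and
    # falls when inventory builds up. A reductionist, one-cause-per-effect
    # reading would miss this loop and extrapolate the price linearly.
    price += 0.05 * (100.0 - inventory)
    prices.append(round(price, 2))

print(prices)   # price drifts back toward equilibrium rather than moving linearly
```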
In an experiment conducted prior to 2008, undergraduate accounting students (juniors and seniors) are given training in evaluating either stocks and flows (systems perspective) or business risks (reductionist perspective). They then learn about an audit client and its industry, in which multiple factors interact over time to determine the product’s price. Using the technique they were taught, they then graph the product price over time. The students are then provided management’s estimate of the price and evaluate its credibility. Finally, the participants learn new information about the industry and are asked to factor it into their price evaluation.
For practice, the authors provide evidence about the relation between control deficiencies and substantive tests in the integrated audit. A significant minority of senior auditors attempt to identify bias in an accounting estimate by increasing sampling from the biased estimation process, even though they have been told that the estimation process is biased. The authors provide theory-consistent empirical evidence that auditors often reach questionable, optimistic judgments about the capability of audit evidence to address control deficiencies. Auditors often revert to what they know best, and it is difficult to get people to look beyond the familiar, regardless of experience level.
Mauldin, E. G., and C. J. Wolfe. 2014. How Do Auditors Address Control Deficiencies that Bias Accounting Estimates? Contemporary Accounting Research 31 (3): 658-680.
According to professional standards, auditors must integrate the internal control and financial statement audits. Revised risk assessment standards were issued, in part, to improve the integration of controls into the financial statement audit. However, PCAOB inspections find that auditors sometimes do not appropriately change the nature, timing, and/or extent of their substantive tests in response to clients’ internal controls. Auditors often have difficulty modifying substantive tests when responding to identified control deficiencies.
To shed light on the underlying reasons for this difficulty, the authors of this study design a contextually rich experimental case and examine how auditors map a control deficiency into modifications of substantive tests. The authors examine control deficiencies that cause errors of omission in an estimation process, resulting in an incomplete and biased estimation process. The focus is on whether auditors recognize the insufficiency of reviewing the biased estimation process and how they select alternative tests to replace or supplement such review.
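The statistical intuition for why increased sampling cannot cure such a deficiency can be shown with a small simulation (our illustration, with invented parameters): draws from a process with an omission bias converge on the biased value, not the truth, no matter how large the sample.

```python
# Invented simulation: larger samples from a biased estimation process
# tighten around the biased mean, so more sampling never reveals the bias.
import random

random.seed(1)
TRUE_MEAN = 100.0
OMISSION_BIAS = -15.0   # the process systematically omits a class of items

def biased_draw():
    return random.gauss(TRUE_MEAN + OMISSION_BIAS, 20.0)

for n in (10, 100, 10_000):
    estimate = sum(biased_draw() for _ in range(n)) / n
    print(f"n={n:>6}: estimate={estimate:6.1f} (truth={TRUE_MEAN})")

# The estimates settle near 85, never 100; only evidence from outside the
# biased process (an alternative test) can surface the omission.
```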
Eighty-seven auditors attending one Big 4 firm’s national training for experienced audit seniors participated in the study. The authors employ a between-participants experimental design with two treatments. The authors describe the treatments in sequence within the experimental task. They then randomly assign participants to experimental treatments and ask them to complete a case study. The evidence was collected prior to September of 2014.
The practical implication of this research for auditors is that they should avoid forming initial hypotheses until after they have obtained a comprehensive perspective on the data. Auditors should instead treat the early stages of the decision process as a fact-finding exercise.
Luippold, B. L., and T. E. Kida. 2012. The Impact of Initial Information Ambiguity on the Accuracy of Analytical Review Judgments. Auditing: A Journal of Practice & Theory 31 (2): 113-129.
This study seeks to determine the extent to which initial information ambiguity affects analytical review judgments. That is, this paper examines whether the impact of initial information ambiguity persists even after the ambiguity is gone.
Around 2010, 94 participants, mainly staff-level auditors, completed an experiment in which they performed preliminary analytical procedures on financial data containing a seeded error. Participants were assigned to conditions with different levels of initial information ambiguity, and all were then given the full data to make a final judgment.
The main finding of this paper is that initial information ambiguity affects an auditor's ability to detect financial statement errors at the end of the analytical review process. Specifically, if auditors develop initial hypotheses using ambiguous information sets, they are less likely to identify the errors causing fluctuations in financial data, even after they search through all of the client's relevant information.
The results of the study are important, as they demonstrate that relations among different financial statement accounts should be considered when examining how auditors respond to changes in the risk of misstatement. Specifically, auditors do appear to respond to increases in the risk of material misstatement of one account by also increasing planned audit hours in related accounts. The results also highlight the fact that auditors’ responses to changes in audit risk are insensitive to fee pressure; specifically, the increase in budgeted audit hours when encountering a serious misstatement is similar whether fee pressure is low or high. Moreover, the author suggests that the concept of relatedness of accounts explored in this paper could be extended to tests of internal controls; for example, information about the effectiveness of one internal control could be informative about the strength or importance of related internal controls.
Vandervelde, S. 2006. The Importance of Account Relations when Responding to Interim Audit Testing Results. Contemporary Accounting Research 23 (3): 789-821.
Both U.S. and international auditing standards require auditors to adapt audit procedures as the risk of the audit engagement changes. Because many financial statement accounts are interrelated (e.g., accounts payable and inventory), it is important for auditors to consider the relations between accounts when planning the audit and adjusting audit procedures for changes in risk. This study tests auditors’ responses to risk changes discovered during interim testing (potential fraud, error, or no problem). The study also explores the following two potential reasons why prior research has generally concluded that auditors are not very responsive to risk changes:
The research evidence was collected prior to 2004. The author recruited audit senior associates from both Big 4 and non-Big 4 audit firms to complete a simulated audit budgeting task via a website. Participants are first asked to read background information on the audit client, including the prior year audit budget and realized audit hours. Then, participants are asked to prepare an initial budget for audit hours allocated to five financial statement accounts. Next, participants view the results from interim testing procedures (where the potential fraud, error, or no problem arises) and are asked to indicate the amount of audit hours they would budget for the year-end audit work, representing their response to the change (or no change) in risk.
This research furthers the understanding of auditors’ judgment performance in four important ways. We show that
For more information on this study, please contact David Plumlee.
Plumlee, R. D., B. Rixom, and A. Rosman. 2015. Training auditors to perform analytical procedures using metacognitive skills. The Accounting Review 90 (1): 351-369.
Auditors encounter many ill-structured tasks. Due, in part, to their greater technical knowledge, partners and managers perform these tasks better than less experienced auditors. Partners and managers also have in memory a diverse set of problem solutions gained from their experience that they can retrieve as needed to organize and solve ill-structured problems. Less experienced auditors do not have access to these additional experiences and may benefit from a more structured approach to thinking while solving ill-structured tasks. We believe that training less experienced auditors in metacognition (consciously thinking about one’s thought process) will help close the performance gap. We chose to train less experienced auditors to use a sequential thought process consisting of two metacognitive skills: divergent thinking, in which they generate explanations for unusual evidence, followed by convergent thinking, in which they evaluate the explanations generated and eliminate those judged infeasible. Training less experienced auditors in the proper use of these skills was expected to provide them with the problem-structuring knowledge that managers and partners acquire through their frequent encounters with ill-structured situations.
Auditors with approximately two years of experience were randomly assigned to receive training in either divergent and convergent thinking skills, only divergent, or neither (a control). The training included four separate self-paced online sessions over two weeks. At the end of each session, we measured participants’ comprehension of the training and their ability to apply the specific skills addressed in that session. The fourth session synthesized the previous sessions and included a comprehensive analytical review case to measure whether the training resulted in better performance.
We found that
Analytical procedures are used frequently and are increasingly relied upon as substantive evidence. Based on this study, auditors are insensitive to the imprecision of an analytical procedure when its results are favorable, which may cause over-reliance on weak evidence. Performing a stronger, more precise analytical procedure made participants in the favorable-outcome condition more aware of the weakness of the initial procedure and led them to re-evaluate their evidence strength rating. Further, the evidence suggests that having auditors consider the possible weaknesses of an analytical procedure before performing it causes them to rate the strength of the evidence from a weak procedure lower. Overall, this suggests a need to better train auditors in performing and interpreting analytical procedures.
In a discussion of Glover et al.’s paper, McDaniel asks whether the findings may indicate that auditors in the unfavorable outcome condition (i.e., there is a material difference) are under-relying on the evidence, rather than that auditors in the favorable outcome condition (no material difference) are over-relying on it. Glover et al. respond that over-reliance on the evidence is the concern to regulators and that the alternative does not explain all of the results. McDaniel also notes that the case study involved a company in the financial industry but that the participants were not required to have any financial industry experience. Glover et al. note that the item at issue, interest income, is neither industry-specific nor complicated. McDaniel also raises concerns about a potential “anchoring” effect, as the participants performed their analytical procedures based on prior year working paper results. In response, Glover et al. point out that working from prior year working papers is itself a feature of actual audits.
Glover, S. M., D. F. Prawitt, and T. J. Wilks. 2005. Why Do Auditors Over-Rely on Weak Analytical Procedures? The Role of Outcome and Precision. Auditing: A Journal of Practice & Theory 24 (Supplement): 197-220.
McDaniel, L. 2005. Discussion of: “Why Do Auditors Over-Rely on Weak Analytical Procedures? The Role of Outcome and Precision.” Auditing: A Journal of Practice & Theory 24 (Supplement): 221-228.
Glover, S. M., D. F. Prawitt, and T. J. Wilks. 2005. Reply to: Discussion of “Why Do Auditors Over-Rely on Weak Analytical Procedures? The Role of Outcome and Precision.” Auditing: A Journal of Practice & Theory 24 (Supplement): 229-232.
In 2000, a Public Oversight Board panel reviewed audit work papers and determined that, 20% of the time, substantive analytical procedures were weak and provided insufficient evidence to support the conclusion. This study examines one possible reason why auditors over-rely on weak, unreliable analytical procedures. The authors hypothesize that auditors do not consider their existing knowledge about the quality of the procedure when the outcome indicates that the balance is “fairly stated.”
The authors performed two experiments prior to 2005 in which a material misstatement exists and a “weak, unreliable” (highly aggregated) analytical procedure is used. In experiment 1, senior associates from one Big 4 accounting firm were asked to perform an interest revenue analytical procedure at the annual grand-total level and compare the results to the client’s unaudited balance. The balance is manipulated so that some participants’ results indicate no significant difference (i.e., a favorable outcome) and the other participants’ results indicate a significant difference (i.e., an unfavorable outcome). Participants evaluated the strength of the analytical procedure and reached a conclusion about misstatement. Additional disaggregated computations (interest revenue calculations broken down by type of loan and performed on a quarterly rather than an annual basis) were then provided. Participants then reassessed the strength of the aggregated analytical procedure. In experiment 2, different senior associates from one Big 4 accounting firm were asked to document the weaknesses of the analytical procedure prior to performing it.
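To illustrate why the disaggregated computation is the more precise procedure, the sketch below builds an interest revenue expectation two ways: at the annual grand-total level and per loan type per quarter. The balances, rates, seeded overstatement, and tolerance are invented for illustration; they are not the case materials.

```python
# Invented numbers: a misstatement that an annual grand-total expectation
# averages away is isolated by a per-loan-type, per-quarter expectation.

# (loan_type, quarter) -> (average balance, annual rate, recorded interest)
recorded = {("commercial", q): (2_000_000, 0.08, 40_000) for q in range(1, 5)}
recorded.update({("consumer", q): (1_000_000, 0.12, 30_000) for q in range(1, 5)})
recorded[("consumer", 4)] = (1_000_000, 0.12, 55_000)   # seeded overstatement

TOLERANCE = 0.10   # assumed: differences within 10% of the expectation pass

def expectation(balance, rate):
    return balance * rate / 4          # one quarter of annual interest

# Aggregated procedure: one annual expectation vs. the recorded total.
expected_total = sum(expectation(b, r) for b, r, _ in recorded.values())
diff_total = sum(rec for *_, rec in recorded.values()) - expected_total
print("aggregate:", "pass" if abs(diff_total) < TOLERANCE * expected_total
      else "investigate")              # passes: 25,000 diff vs 28,000 tolerance

# Disaggregated procedure: an expectation for each loan type and quarter.
for key, (b, r, rec) in recorded.items():
    if abs(rec - expectation(b, r)) > TOLERANCE * expectation(b, r):
        print("investigate", key)      # flags ('consumer', 4)
```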
Experiment 1
Experiment 2