The presence of Julie Foerster, the IRS project director for digital assets, at Consensus 2023 appeared to be something of a listening and get-to-know-you session. “We are engaging with you all so we get it right and build a plan,” she said, according to Coinbase. She “highlighted that the landscape for digital assets is an evolving one, and emphasized a need to increase communications between the agency and the crypto community.”

Foerster also said the IRS “needs to look at the skills of the people we have today and those we will bring on in the future,” Coinbase reported. “We need to have the right tools and the right people.”

The strategic plan the IRS released last month for its $80 billion in new funding includes hiring more experts to deal with digital assets and developing “new analytics-enabled capabilities … to support digital asset compliance,” a “milestone” it set for fiscal year 2024. It also plans to have its “Information Returns platform enhanced to support digital asset reporting” during FY 2024.

AI REPORT CARD: There’s been plenty of chatter lately about how artificial intelligence might affect the tax world, for good or ill. As our Benjamin Guggenheim reported in this space last month, one expert found that the buzzy bot ChatGPT did a pretty good job when asked to cook up tax scams. On the flip side, the IRS is using AI to catch tax cheats and improve customer service. It’s the same in the accounting industry — some see it as a threat to their livelihood, others as a powerful tool they can put to work. (Even ChatGPT has its own opinion about that, according to Accounting Today.)

Now, Brigham Young University is out with a report measuring how well ChatGPT would do on accounting exams. The massive, crowd-sourced project involved 186 universities in 14 countries, which contributed a total of 25,181 exam questions and 2,268 textbook test questions.
Comparing the bot’s performance to that of human students, the researchers found the students won, hands down, with an overall average score of 76.7 percent compared to 47.4 percent for ChatGPT. Among the bot’s worst subjects: taxes. “On 11.3% of questions, ChatGPT scored higher than the student average, doing particularly well on AIS and auditing. But the AI bot did worse on tax, financial, and managerial assessments, possibly because ChatGPT struggled with the mathematical processes required for the latter type,” a summary of the study said.

One caveat: The researchers used the original version of ChatGPT, not the newest product, GPT-4. The report’s authors “fully expect GPT-4 to improve exponentially on the accounting questions posed in their study” and the issues the bot stumbled over, according to the summary. “What they find most promising is how the chatbot can help improve teaching and learning, including the ability to design and test assignments, or perhaps be used for drafting portions of a project.”