IBM partner Carrington Associates has played a key role in helping the Federal Court of Australia recognise the benefits of using artificial intelligence in areas of family law.
During IBM Think Summit in Sydney, Carrington revealed it had developed an AI tool as part of a proof-of-concept in which IBM Watson was trained to understand consent orders and outcomes in family law property matters, using about 1,600 anonymised data sets. These were split between applicant and respondent, examining how decisions were made and the outcomes achieved in those cases.
The pilot program looked at how AI can be used to give parties information about how they might consider splitting their assets in a property matter, and to let lawyers leverage the technology when giving legal advice.
When it comes to dividing assets between couples, certain metrics are used, such as age, length of the relationship, income, material assets and whether there are children of the relationship.
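The metrics above can be sketched as a simple feature record. This is purely illustrative: the field names and types below are assumptions for the sketch, not the court's or Carrington's actual data schema.

```python
from dataclasses import dataclass

@dataclass
class PropertyMatterFeatures:
    # Illustrative feature set based on the metrics described in the
    # article; field names and types are assumptions, not a real schema.
    applicant_age: int
    respondent_age: int
    relationship_years: float   # length of the relationship
    applicant_income: float     # annual income
    respondent_income: float
    pool_value: float           # combined material assets
    num_children: int           # children of the relationship

record = PropertyMatterFeatures(
    applicant_age=45, respondent_age=43,
    relationship_years=12.0,
    applicant_income=95_000.0, respondent_income=60_000.0,
    pool_value=850_000.0, num_children=2,
)
print(record.num_children > 0)  # True
```

A structured record like this is the kind of input a model would be trained on once the free-text consent orders have been converted into data.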
Carrington engaged with the Federal Court in design thinking workshops spanning two days, with the proof-of-concept taking about 14 to 20 weeks to develop.
“We did a couple of workshops with them in terms of how we're going to look at the data sets, privacy issues, the type of environment, and addressing concerns about governance,” Carrington Associates managing director Sachin Khisti said.
Khisti said Carrington worked closely with the IBM commercial and enterprise team, which presented it with the pilot program opportunity last year.
“The Federal Court was interested in looking at AI. The products are becoming much more mature than they were a couple of years ago, and this is just a starting point,” he said.
During the pilot program, the machine learning tool returned a high accuracy rating of 94 per cent, according to digital practice lead for the Federal Court of Australia Jessica Der Matossian. On average, the Federal Court processes 1.5 million documents per month.
"Being a Court, we’re limited in how we can use AI," Der Matossian said. “To use it as a decision-making tool just isn’t possible for us. AI isn’t transparent enough and it raises a lot of liability and legal questions, such as ‘can I have access to the algorithms? Are the algorithms a trade secret for the court? Who’s liable for decisions made by AI? Do I have a right to appeal a decision made by AI? What does that appeal process look like?’
“The most important role for the court is to be transparent and unbiased.”
But the pilot program threw up some interesting challenges for the Federal Court, such as the way it stores data and understanding the human element when it came to asset division.
Extracting the data was no easy feat, with most of it stored in PDF documents; to overcome this hurdle, Carrington used OCR (optical character recognition). The next step involved data wrangling, followed by building the AI model. This meant short-listing machine learning algorithms and optimising the model so it could be deployed in IBM Watson Studio.
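The pipeline described above can be sketched in miniature: extract text, wrangle it into structured records, then score a shortlist of candidate models. Everything here is an illustrative stand-in; the OCR stub, field names and scoring are assumptions, and the actual work used real OCR tooling and IBM Watson Studio.

```python
def ocr_extract(pdf_bytes: bytes) -> str:
    # Stand-in for a real OCR pass over scanned consent orders;
    # here we simply pretend the PDF decodes to plain text.
    return pdf_bytes.decode("utf-8")

def wrangle(raw_text: str) -> dict:
    # Data wrangling: turn free text into a structured record
    # (illustrative "key=value" fields, not the real document layout).
    fields = dict(part.split("=") for part in raw_text.split(";"))
    return {"years": float(fields["years"]), "split": float(fields["split"])}

def shortlist(records: list, models: dict) -> str:
    # Short-list candidate models: score each against the records
    # and keep the one with the lowest total error.
    def error(predict):
        return sum(abs(predict(r["years"]) - r["split"]) for r in records)
    return min(models, key=lambda name: error(models[name]))

docs = [b"years=5;split=0.45", b"years=20;split=0.55"]
records = [wrangle(ocr_extract(d)) for d in docs]
best = shortlist(records, {
    "constant_half": lambda years: 0.5,
    "years_based": lambda years: 0.4 + 0.01 * years,
})
print(best)  # years_based
```

The shape of the sketch matches the article's sequence: OCR first, then wrangling into records, then model short-listing and optimisation before deployment.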
“The levels of accuracy that we were able to achieve from that small data set spoke volumes for its potential use,” Carrington technical assistant Varun Vijaywargi said.
“The data stays anonymous, so any issues around users’ privacy have already been taken care of. We had some great minds working behind this application, but at the same time we needed the right tools, such as IBM.”
The successful pilot program will now turn to using IBM OpenScale, which aims to help build trust in the use of AI.
IBM launched OpenScale in October last year in an effort to explain how AI models make decisions, building trust and transparency in AI applications.
“The Federal Court were really interested to see if AI works, the accuracy, and if it met their expectations. We’re now moving further into this - looking into more data sets, and other avenues as well such as chatbots,” Khisti said.