Advisory report begins integration of generative AI at U-M

A committee looking into how generative artificial intelligence affects University of Michigan students, faculty, researchers and staff has issued a report that attempts to lay a foundation for how U-M will live and work with this new technology.

Recommendations include:

  • Establishing a universitywide initiative to leverage GenAI in developing tools and methodologies for AI-augmented education and research.
  • Creating best-practice standards for privacy protections, data use controls and research integrity when using GenAI.
  • Expanding existing information technology infrastructure to accommodate secure and equitable access to GenAI platforms.

The report is available to the public at a website created by the committee and Information and Technology Services to guide how faculty, staff and students can responsibly and effectively use GenAI in their daily lives.

U-M also has announced its own suite of university-hosted GenAI services, focused on providing safe and equitable access to AI tools for all members of the U-M community. The services are expected to be released before students return to campus this fall.

“GenAI is shifting paradigms in higher education, business, the arts and every aspect of our society. This report represents an important first step in U-M’s intention to serve as a global leader in fostering the responsible, ethical and equitable use of GenAI in our community and beyond,” said Laurie McCauley, provost and executive vice president for academic affairs.

The report offers recommendations on everything from how instructors can effectively use GenAI in their classrooms to how students can protect themselves from the risks of sharing sensitive data when using popular GenAI tools such as ChatGPT.

“More than anything, the intention of the report is to be a discussion starter,” said Ravi Pendse, vice president for information technology and chief information officer. “We have heard overwhelmingly from the university community that they needed some direction on how to work with GenAI, particularly before the fall semester started. We think this report and the accompanying website are a great start to some much-needed conversations.”

McCauley and Pendse sponsored the creation of the Generative Artificial Intelligence Advisory Committee in May. Since then, the 18-member committee — composed of faculty, staff and students from across all segments of U-M — has worked together to provide vital insights into how GenAI technology could affect their communities.

“Our goals were to present strategic directions and guidance on how GenAI can enhance the educational experience, enrich research capabilities, and bolster U-M’s leadership in this era of digital transformation,” said committee chair Karthik Duraisamy, professor of aerospace engineering and of mechanical engineering, and director of the Michigan Institute for Computational Discovery and Engineering.

“Committee members put in an enormous amount of work to identify the potential benefits of GenAI to the diverse missions of our university, while also shedding light on the opportunities and challenges of this rapidly evolving technology.”

“This is an exciting time,” McCauley added. “I am impressed by the work of this group of colleagues. Their report asks important questions and provides thoughtful guidance in a rapidly evolving area.”

Pendse stressed that the GenAI website will be continually updated and will serve as a hub for the various GenAI-related discussions taking place across U-M.

“We know that almost every group at U-M is having their own conversations about GenAI right now,” Pendse said. “With the release of this report and the website, we hope to create a knowledge hub where students, faculty and staff have one central location where they can come looking for advice. I am proud that U-M is serving both as a local and global leader when it comes to the use of GenAI.”

Comments

  1. Ashley Daniels
    on July 26, 2023 at 9:58 am

    This is as disappointing to hear as it is worrying. For all the important, “hard” questions the team is asking, no one is asking whether we should even touch something like ChatGPT, which has been shown time and again to scrape together bits of stolen data and to fabricate things it presents as fact. I have my doubts that a tool shown to plagiarize and give inaccurate information can provide any unalloyed good to the field of medicine, let alone anything outside of it.

    • Stephany Daniel
      on July 26, 2023 at 10:47 am

      Thank you for saying this, Ashley. I too find this troubling. Most of what is called generative AI would be more accurately described as plagiarism software, as it pulls language and art from creators who never consented to have their work used to train the models in the first place and are not compensated for its use. Even if the goal is for U-M to develop its own version, where will it be taking its data from to train the software? The work of U-M students, faculty, and staff? I had hoped that the ethical and potential legal issues inherent in this technology would give the committee more pause.

  2. Maxwell Preissner
    on July 26, 2023 at 10:57 am

    I think this is super exciting! While there are risks associated with generative AI, such as privacy concerns and inaccurate information, there is so much that can be done with it to speed up processes and workflows, as well as to help brainstorm or generate ideas, summarize information, and much more.

    People are going to use it one way or another because it can save hours of time and help them come up with things they never would have without it, so I am glad the University is taking this step to provide access and information for its safe and proper use.

  3. John Umbriac
    on July 28, 2023 at 3:02 pm

    I understand your concern, Ashley. Image generators especially have a problem of directly reproducing their training data without credit to the original artist. With language models, the input-to-output relation is not quite so clear. It’s worth thinking about what differentiates the transformation that ChatGPT performs from what we humans do when we read and reflect on writing. I don’t believe that the ability to write something meaningfully transformative of source material is unique to humans, and considering this technology is just starting to be developed and iterated upon, its capabilities are only going to get better. Even if we put a line in the sand between transformative and plagiarizing, that just acts as a new goal for the technology to achieve.
