Hi there, I am Jiaxin 👋!

🔭 I am a Senior Staff Research Scientist at Intuit AI Research, where my focus is Generative AI (large language models (LLMs) and diffusion models) and AI Robustness & Safety (uncertainty, reliability, and trustworthiness), with extensive applications to complex real-world tasks. Previously, I was a research staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, where my research aimed at accelerating AI for Science on supercomputers such as Summit and Frontier. I received my Ph.D. from Johns Hopkins University with an emphasis on uncertainty quantification (UQ).

📫 You can find more information on my personal website, and feel free to contact me via email at [email protected].

😄 Some recent publications on LLMs (full publication list on Google Scholar)


Pinned repositories

  1. SURGroup/UQpy (Python, 279 stars / 81 forks): UQpy (Uncertainty Quantification with Python) is a general-purpose Python toolbox for modeling uncertainty in physical and mathematical systems.

  2. zhuohangli/GGL (Jupyter Notebook, 58 stars / 15 forks): A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage".

  3. Awesome-LLM-Uncertainty-Reliability-Robustness (682 stars / 47 forks): A curated list of work on uncertainty, reliability, and robustness in large language models.

  4. intuit/sac3 (Jupyter Notebook, 33 stars / 7 forks): Official repo for "SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency" (a conceptual sketch of the cross-check idea follows this list).

  5. Awesome-LLM-RAG (999 stars / 62 forks): A curated list of advanced retrieval-augmented generation (RAG) techniques in large language models.

  6. intuit-ai-research/DCR-consistency (Python, 22 stars / 3 forks): DCR-Consistency: Divide-Conquer-Reasoning for consistency evaluation and improvement of large language models.
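
SAC3 (item 4 above) flags likely hallucinations in a black-box LLM by checking whether its answers stay consistent when the same question is re-asked and semantically rephrased. The following is a minimal conceptual sketch of that cross-check idea, not the actual sac3 API: `query_model`, the example question and paraphrase, and the threshold are all illustrative assumptions.

```python
# Conceptual sketch of cross-check consistency for hallucination detection
# (in the spirit of SAC3). All names below are hypothetical; the real
# intuit/sac3 package exposes a different interface.
from collections import Counter
from typing import Callable, List


def consistency_score(
    question: str,
    paraphrases: List[str],
    query_model: Callable[[str], str],  # user-supplied LLM call (assumed)
    n_samples: int = 5,
) -> float:
    """Fraction of sampled answers that agree with the majority answer.

    Answers are collected for the original question and for semantically
    equivalent paraphrases; low agreement suggests a possible hallucination.
    """
    answers: List[str] = []
    for q in [question] + paraphrases:
        for _ in range(n_samples):
            answers.append(query_model(q).strip().lower())
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)


# Usage sketch (my_llm_call and the 0.7 threshold are assumptions):
# score = consistency_score(
#     "Is 3307 a prime number?",
#     ["Is the number 3307 prime?"],
#     query_model=my_llm_call,
# )
# if score < 0.7:
#     print("Low cross-check agreement: possible hallucination.")
```

The actual SAC3 method is more involved (for example, it also cross-checks answers against a second model); this sketch only conveys the basic agreement check.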