Welcome to My Space!


I'm Peiran Wang, from Liangping, a small city in Chongqing, China.

I'm currently a new CS PhD student at UCLA, looking forward to seeing everyone in California!

I'm an ENTP. 100% E😁. I love to joke and rant when chatting. I'm casual and straightforward😄, and I don't like to beat around the bush.

Education
  • University of California, Los Angeles
    Department of Computer Science
    Ph.D. Student
    Sep. 2025 - present
  • Tsinghua University
    Master in Cybersecurity
    Sep. 2022 - Jul. 2025
  • Sichuan University
    Bachelor in Cybersecurity
    Sep. 2018 - Jul. 2022
UCLA Security Lab

I'm a member of the UCLA Security Lab!

Visit our lab!

2024 CCS Distinguished Paper Award

My work at UCSD, Moderator, was awarded a Distinguished Paper Award @ CCS 2024!

We present Moderator, a policy-based model management system that allows administrators to specify fine-grained content moderation policies and modify the weights of a text-to-image (TTI) model to make it significantly more challenging for users to produce images that violate the policies. In contrast to existing general-purpose model editing techniques, which unlearn concepts without considering the associated contexts, Moderator allows admins to specify what content should be moderated, under which context, how it should be moderated, and why moderation is necessary. Given a set of policies, Moderator first prompts the original model to generate images that need to be moderated, then uses these self-generated images to reverse fine-tune the model to compute task vectors for moderation, and finally negates the original model with the task vectors to decrease its performance in generating moderated content. We evaluated Moderator with 14 participants playing the role of admins and found that they could quickly learn and author policies, passing unit tests in approximately 2.29 policy iterations. Our experiment with 32 Stable Diffusion users suggested that Moderator can prevent 65% of users from generating moderated content within 15 attempts and requires the remaining users an average of 8.3× more attempts to generate undesired content.
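The "compute task vectors, then negate" step above follows the general task-arithmetic idea: a task vector is the difference between fine-tuned and original weights, and subtracting it suppresses the fine-tuned behavior. A minimal sketch of that arithmetic, using plain dicts of floats as stand-in weights (the function names and toy values here are my own illustration, not Moderator's actual API, which operates on TTI model parameters):

```python
def task_vector(original, fine_tuned):
    """Task vector = fine-tuned weights minus original weights."""
    return {k: fine_tuned[k] - original[k] for k in original}

def negate(original, vector, alpha=1.0):
    """Subtract the scaled task vector from the original weights,
    degrading the model's ability to produce the fine-tuned behavior."""
    return {k: original[k] - alpha * vector[k] for k in original}

# Toy "weights" before and after reverse fine-tuning on
# self-generated images of moderated content.
original = {"w1": 0.5, "w2": -0.25}
fine_tuned = {"w1": 1.0, "w2": 0.25}

tv = task_vector(original, fine_tuned)       # {"w1": 0.5, "w2": 0.5}
moderated = negate(original, tv, alpha=1.0)  # {"w1": 0.0, "w2": -0.75}
```

The scaling factor `alpha` (an assumption here) controls how strongly the moderated behavior is suppressed, trading off against the model's performance on unrelated content.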
