
👋 Hi, I’m Amber Chang
I’m a researcher and creative technologist exploring the space where computer vision, graphics, and machine learning meet. My work focuses on building generative and interactive ML systems that blend computational creativity with human-centered design.
I’m especially passionate about:
- Multi-modal intelligence
- ML-driven art & creative tools
- Reasoning + generation pipelines
- High-performance simulation and inference
After completing my master’s in scientific computing, I’m looking for opportunities in data science, machine learning, or AI research where I can apply cutting-edge models to meaningful real-world problems.
🚀 What I Do
- Build multimodal and generative systems
- Design pipelines for text-to-video diffusion
- Explore acceleration strategies for inference
- Create interactive ML-driven art experiences
- Bridge engineering, creativity, and large-scale data
🔑 Skills at a Glance
Machine Learning: Diffusion Models · MLLMs · ViT · Uncertainty Quantification · Deep Learning
AI Tools: HuggingFace · LangChain · PyTorch · TensorFlow
Programming: Python · CUDA · C/C++ · R · MPI · OpenMP
Systems: Linux · HPC · Docker · Virtualization
Math & Modeling: Probability · Statistical ML · Scientific Computing · Parallel Computing
🎓 Education
- MSc Scientific Computing & Data Analytics, Durham University (UK)
- Applied Data Science, MIT (Online)
- BA Foreign Languages & Literature, National Taiwan University (Minor: Art & Design)
🔬 Research Highlights
Text-to-Video Generation with an MLLM Pipeline (Algoverse)
Second Author — Submitted to ICLR 2026
- Designed and implemented the end-to-end pipeline for temporal refinement and reasoning-driven prompting.
- Built temporal interpolation experiments that improved video consistency and detail.
- Surveyed alignment and evaluation strategies using synthetic data and retrieval-augmented prompting.
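
To give a flavour of the temporal interpolation mentioned above, here is a minimal sketch in PyTorch. It assumes keyframes are already decoded to a `(T, C, H, W)` tensor and uses a simple linear blend; the frame count, blend factor, and interpolation scheme are illustrative, not the pipeline’s actual method.

```python
# Minimal sketch: linear interpolation between generated keyframes to
# densify a clip and smooth temporal transitions.
# Assumes keyframes are already decoded to a (T, C, H, W) tensor;
# real pipelines often interpolate in latent space instead.
import torch


def interpolate_frames(keyframes: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Insert `factor - 1` linearly blended frames between each pair of keyframes."""
    out = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        out.append(a)
        for i in range(1, factor):
            w = i / factor                      # blend weight for the in-between frame
            out.append((1.0 - w) * a + w * b)   # simple linear blend
    out.append(keyframes[-1])
    return torch.stack(out)


if __name__ == "__main__":
    clip = torch.rand(8, 3, 64, 64)             # 8 dummy keyframes
    dense = interpolate_frames(clip, factor=4)  # -> 29 frames
    print(dense.shape)
```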
Uncertainty Quantification for Reservoir Simulation
Durham University
- Applied Quasi–Monte Carlo methods to large-scale reservoir models.
- Ran 1,000+ HPC physics simulations over 200-year horizons, achieving a 10× speedup.
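
As a toy illustration of the quasi-Monte Carlo idea (not the reservoir model itself), the sketch below compares plain Monte Carlo with a scrambled Sobol sequence from `scipy.stats.qmc` on a smooth test integrand with a known mean; the dimension, sample count, and integrand are placeholders.

```python
# Toy comparison of plain Monte Carlo vs. quasi-Monte Carlo (Sobol) sampling
# for estimating the mean of a smooth test function over the unit hypercube.
# The integrand is a stand-in; the actual work targeted reservoir simulators.
import numpy as np
from scipy.stats import qmc


def integrand(x: np.ndarray) -> np.ndarray:
    # Smooth test function with a known mean over [0, 1]^d
    return np.prod(np.cos(x), axis=1)


d, n = 6, 2**12
exact = np.sin(1.0) ** d  # integral of cos over [0, 1] is sin(1), per dimension

# Plain Monte Carlo: i.i.d. uniform samples
rng = np.random.default_rng(0)
mc_est = integrand(rng.random((n, d))).mean()

# Quasi-Monte Carlo: a scrambled Sobol sequence covers the cube more evenly
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = integrand(sobol.random_base2(m=12)).mean()

print(f"MC error:  {abs(mc_est - exact):.2e}")
print(f"QMC error: {abs(qmc_est - exact):.2e}")
```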
Facial Emotion Detection
MIT IDSS
- Trained deep vision models (CNNs, VGG-Net) for emotion recognition.
- Achieved 80.02% accuracy with careful regularization and data augmentation.
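
A condensed sketch of the kind of setup involved: a small CNN with dropout plus standard augmentations in PyTorch. The layer sizes, transforms, input resolution (48×48 grayscale), and 7-class output are assumptions for illustration, not the exact configuration used.

```python
# Condensed sketch of the training ingredients: a small CNN with dropout
# plus standard data augmentation. Layer sizes and transforms are illustrative.
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentations that would be passed to the training Dataset / DataLoader
train_tfms = transforms.Compose([
    transforms.Grayscale(),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = 7):  # 7 basic emotion classes assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout(0.25),                 # regularization between conv blocks
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Dropout(0.5),                  # heavier dropout before the head
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


model = EmotionCNN()
dummy = torch.rand(4, 1, 48, 48)              # FER-style 48x48 grayscale faces
print(model(dummy).shape)                     # -> torch.Size([4, 7])
```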
💼 Experience
AI Student Researcher — Algoverse
Remote, 2025
Working on multimodal intelligence, reasoning-guided generation, and creative ML systems.
R&D Intern — Durham University & OPENGO SIM
Durham, UK, 2024
Developed HPC-ready UQ pipelines, optimized sampling frameworks, and improved simulation speed by 10×.
Data Analyst — Ministry of Foreign Affairs
Taipei, Taiwan, 2021
Analyzed global trends, produced policy reports, and collaborated across government teams.
🌍 Conferences & Community
- ACML Generative AI Workshop – Italy (2025)
- Creative AI – London (2025)
- IEEE IoT Seasonal School – UK (2024)
- Member: ACM, WiGraph, Women in High Performance Computing (WHPC)
- Contributor: UM-Bridge (Open Source)
📘 Coursework Snapshot
HPC: Data Structures & Algorithms (A+) · GPU Programming (A) · Parallel Computing
AI/ML: Intro to Machine Learning (A+) · Advanced ML I/II (A)
Math: Linear Algebra (A+) · Probability · Numerical Algorithms
✨ Let’s Connect
If you’re working on AI, vision, creative tools, or multimodal systems, I’d love to chat.
Feel free to reach out!