PetGenAI: Automated Pet Portrait Generation

A scalable ML-powered service that automates pet portrait creation for printing factories, transforming customer photos into unique artwork while maintaining quality and handling high-volume demand.

PetGenAI Landing Page - ML-Powered Pet Portrait Generation

Overview & Goal

A printing factory wanted to offer a scalable personalization service where customers could upload photos of their pets, and designers would create unique artwork based on those images. The existing process was slow and labor-intensive, requiring manual editing for every order. The goal was clear: automate and scale the design pipeline so a single designer could handle up to 10x more orders without compromising quality.

PetGenAI ML Pipeline - Automated Image Processing Workflow

Challenges

Building a scalable personalization pipeline involved multiple layers of complexity:

  • High-Volume User Media: Ingesting, validating, and storing gigabytes of customer-uploaded images every day.
  • Scalability: Managing workloads that varied from steady daily demand to sharp traffic spikes around promotions.
  • Consistency of Generated Artwork: Ensuring the pet subject remained recognizable across multiple styles and post-processing steps.
  • Integration with Existing Systems: Seamlessly connecting to the factory's CRM.
  • Hybrid Model Hosting: Balancing local GPU workloads with external API calls to manage costs while keeping latency low.

Solution

We delivered a distributed ML-powered service that automated image processing, design generation, and delivery into the factory's workflow.

Key System Components

1. Order Ingestion

Orders and customer images are ingested automatically via the Amazon Seller API, then parsed into structured entities with validation flows for image quality and metadata.
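The ingestion step can be sketched as follows. This is a minimal illustration, not the production code: the payload shape, field names, and resolution threshold are assumptions, and the real system pulled orders from the Amazon Seller API rather than a plain dict. Only the Pillow-based validation mirrors what the document describes.

```python
from dataclasses import dataclass
from io import BytesIO
from PIL import Image

MIN_SIDE_PX = 768                  # assumed minimum resolution for print quality
ALLOWED_FORMATS = {"JPEG", "PNG"}  # assumed accepted upload formats

@dataclass
class OrderItem:
    """Structured entity produced by the ingestion step."""
    order_id: str
    sku: str
    image: Image.Image

def validate_upload(raw_bytes: bytes) -> Image.Image:
    """Reject uploads that are unreadable, in an unsupported format,
    or too small to print well."""
    img = Image.open(BytesIO(raw_bytes))
    if img.format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {img.format}")
    if min(img.size) < MIN_SIDE_PX:
        raise ValueError(f"image too small: {img.size}")
    return img.convert("RGB")

def ingest_order(payload: dict) -> OrderItem:
    """Parse a raw order payload (illustrative shape, not the real
    Seller API schema) into a validated, structured entity."""
    return OrderItem(
        order_id=payload["order_id"],
        sku=payload["sku"],
        image=validate_upload(payload["image_bytes"]),
    )
```

Keeping validation at the boundary means downstream pipeline stages can assume every image is an RGB frame of known minimum size.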

2. Distributed ML Pipeline

Stable Diffusion-based models are fine-tuned for pet artwork generation and run through a multi-step pipeline combining inpainting, style transfer, and subject-consistency modules, followed by automated post-processing with Pillow and the Adobe API.
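The multi-step structure can be sketched as a chain of isolated stages. The stage bodies below are lightweight Pillow stand-ins for illustration only; in the real pipeline each callable would wrap a model-backed step (segmentation, inpainting, style transfer) rather than a simple image transform.

```python
from typing import Callable
from PIL import Image, ImageOps

# Each stage is an isolated Image -> Image callable, so stages can be
# debugged, replaced, or scaled independently (see "Pipeline Modularity").
Stage = Callable[[Image.Image], Image.Image]

def run_pipeline(image: Image.Image, stages: list[Stage]) -> Image.Image:
    """Run each stage in sequence over the working image."""
    for stage in stages:
        image = stage(image)
    return image

def normalize(img: Image.Image) -> Image.Image:
    """Fix EXIF orientation and force a consistent RGB mode."""
    return ImageOps.exif_transpose(img).convert("RGB")

def resize_for_model(img: Image.Image) -> Image.Image:
    """Crop-and-resize to the (assumed) 512x512 model input size."""
    return ImageOps.fit(img, (512, 512))

def stylize(img: Image.Image) -> Image.Image:
    """Placeholder for the style-transfer stage."""
    return ImageOps.posterize(img, 3)
```

Because each stage shares the same signature, swapping a placeholder for a model-backed implementation does not disturb the rest of the chain.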

3. Scalable Hybrid Infrastructure

Local models are deployed on AWS ECS clusters with GPU-backed scaling, and traffic spikes are offloaded to external API calls triggered by AWS Lambda + SQS. Outputs are stored in Amazon S3 and synced with the factory CRM for order tracking.
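The offload decision can be sketched as a small routing function. All names and thresholds here are illustrative assumptions; in production this kind of check would live in the Lambda consuming the SQS queue, with real queue-depth and fleet metrics rather than plain integers.

```python
LOCAL_GPU_CAPACITY = 32   # assumed in-flight jobs the ECS GPU fleet can absorb
SPIKE_THRESHOLD = 100     # assumed queue depth signalling a promotion spike

def route_job(queue_depth: int, gpu_busy: int) -> str:
    """Decide where a generation job should run.

    - Prefer the local GPU fleet while it has free capacity (cheapest).
    - Burst to the external API only during genuine spikes, to avoid
      over-provisioning costly GPU instances.
    - Otherwise leave the job queued for the next local slot.
    """
    if gpu_busy < LOCAL_GPU_CAPACITY:
        return "local-gpu"
    if queue_depth > SPIKE_THRESHOLD:
        return "external-api"
    return "queue"
```

The point of the split is economic: steady demand saturates owned GPUs, and only the spiky tail pays per-call external pricing.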

4. Designer Workflow Integration

Designers reviewed AI-generated drafts through a custom CMS dashboard. The system allowed light manual corrections while reducing the initial editing workload by 80–90%.
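A review flow like this is usually backed by a small state machine. The states and transitions below are hypothetical, chosen to illustrate one plausible shape for the CMS dashboard described above; the actual system's states are not documented here.

```python
from dataclasses import dataclass

# Hypothetical review states for a generated draft.
TRANSITIONS: dict[str, set[str]] = {
    "generated": {"approved", "needs_touchup"},
    "needs_touchup": {"approved", "regenerate"},
    "regenerate": {"generated"},   # re-enters the ML pipeline
    "approved": set(),             # terminal: handed off to printing
}

@dataclass
class Draft:
    order_id: str
    state: str = "generated"

    def advance(self, new_state: str) -> None:
        """Move the draft to a new state, rejecting illegal jumps."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state
```

Encoding the allowed transitions in data keeps the dashboard logic simple: the UI only offers the buttons that `TRANSITIONS` permits for the current state.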

Design Decisions

  • Stable Diffusion was chosen for its adaptability and its open ecosystem of fine-tunes and LoRAs.
  • AWS ECS ensured GPU scaling and reliable batch inference for large order sets.
  • AWS Lambda + SQS offloaded sudden surges without over-provisioning costly GPU instances.
  • Pipeline Modularity: Each stage (segmentation, inpainting, style application) isolated for debugging, replacement, and scaling.
  • Adobe API integration preserved final professional quality and compatibility with existing print workflows.

This project demonstrates end-to-end ML engineering: from model fine-tuning to production deployment, handling real-world scalability challenges while maintaining quality standards.

Project Info

  • Role: ML Engineer
  • Type: Production ML System
  • Date: 2024
  • Scale: High-Volume Production

Tech Stack

  • ML Models: Stable Diffusion, PyTorch
  • Image Processing: Pillow, Adobe API
  • Backend: Django + Celery
  • Infrastructure: AWS ECS, S3, SQS, Lambda
  • Data Handling: Amazon Seller API, Custom CMS
  • Monitoring: W&B, Custom Dashboards