Unlocking Stable Diffusion: A Comprehensive Guide for Mac Users

Setting Up Stable Diffusion on Your Mac

This guide will help you set up and explore Stable Diffusion on your MacBook. Before you begin, ensure you have Python and other necessary packages installed.

Step 1: Install Conda

To start, download the Anaconda installer for Apple Silicon (Anaconda3-2022.10-MacOSX-arm64.sh) from the Anaconda download archive.

After downloading, verify the integrity of the file using this command:

shasum -a 256 ~/Downloads/Anaconda3-2022.10-MacOSX-arm64.sh

You should see a hash that matches:

200700077db8eed762fbc996b830c3f8cc5a2bb7d6b20bb367147eb35f2dcc72
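The same check can be scripted if you prefer; here is a minimal sketch using Python's standard hashlib module (the helper name is mine, and the path and hash are the ones from the step above):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large installers don't load into memory at once.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published hash before running the installer:
# sha256_of("Anaconda3-2022.10-MacOSX-arm64.sh") should equal
# "200700077db8eed762fbc996b830c3f8cc5a2bb7d6b20bb367147eb35f2dcc72"
```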

Next, execute the installation script with the command:

bash Anaconda3-2022.10-MacOSX-arm64.sh

Follow the prompts to complete the installation.

Step 2: Install Python

For this setup, we need Python version 3.7, 3.8, or 3.9. We will use 3.8. Create a new Conda environment with the command:

conda create -n coreml_stable_diffusion python=3.8 -y

Then, activate your new environment:

conda activate coreml_stable_diffusion
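Before moving on, it is worth confirming the interpreter version, since the wheels installed later are sensitive to it. A quick sketch (the helper is hypothetical; the 3.7-3.9 range comes from the requirement above):

```python
import sys

def supported(version_info):
    """Return True if the interpreter is in the 3.7-3.9 range this guide expects."""
    return (3, 7) <= tuple(version_info[:2]) <= (3, 9)

if not supported(sys.version_info):
    print("Recreate the env with: conda create -n coreml_stable_diffusion python=3.8")
```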

Resolving Tokenizers Error on macOS

A common issue that arises on macOS is a Tokenizers build error. You can fix this by installing Rust with the standard rustup installer:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Verify the installation with:

rustc --version

Step 3: Initialize the Stable Diffusion Project

Clone Apple's ml-stable-diffusion repository with:

git clone https://github.com/apple/ml-stable-diffusion.git

Navigate to the repository directory and install the required dependencies:

cd ml-stable-diffusion

pip install -r requirements.txt

Converting and Using the Model

To convert the Hugging Face PyTorch/TF models to Apple’s Core ML format, use the following command:

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o ./models

This process will take around 10 minutes, which includes downloading and converting the Hugging Face model. The default model is CompVis/stable-diffusion-v1-4. If you wish to use another version, adjust the --model-version parameter accordingly.

For Macs with 8GB of RAM, you may encounter insufficient memory alerts. To address this, execute the following commands one by one:

python -m python_coreml_stable_diffusion.torch2coreml --convert-vae-decoder -o ./models

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet -o ./models

python -m python_coreml_stable_diffusion.torch2coreml --convert-text-encoder -o ./models

python -m python_coreml_stable_diffusion.torch2coreml --convert-safety-checker -o ./models

Upon completion, you will find four large model files in the ./models directory.

Verifying the Converted Models

You can verify the model conversion with the following command:

python -m python_coreml_stable_diffusion.pipeline --prompt "magic book on the table" -i ./models -o ./output --compute-unit ALL --seed 93

This command loads the converted models from ./models, saves the generated image to ./output, allows any compute unit (CPU, GPU, or Neural Engine), and fixes the random seed so the run is reproducible. The prompt here is "magic book on the table".
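The --seed flag is what makes runs repeatable: the same seed drives the same noise, and the same noise yields the same image. The principle, illustrated with Python's standard-library PRNG rather than the pipeline's actual sampler:

```python
import random

def noise(seed, n=4):
    """Draw n pseudo-random values from a freshly seeded generator."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Identical seeds reproduce the identical sequence...
assert noise(93) == noise(93)
# ...while a different seed diverges.
assert noise(93) != noise(94)
```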

After running the command, it may take a few minutes to generate the image.

Important Note

You might see a warning regarding Torch version compatibility:

WARNING:coremltools:Torch version 1.13.0 has not been tested with coremltools.

If errors arise, downgrade to Torch 1.12.1, the latest version tested with coremltools:

pip install torch==1.12.1
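A script can also guard against untested Torch builds before kicking off a long conversion. Here is a hypothetical helper comparing dotted version strings (the real coremltools check is more involved; "1.12.1" is the last tested release per the warning above):

```python
LAST_TESTED = "1.12.1"  # latest Torch release tested with coremltools

def version_tuple(version):
    """Turn '1.13.0' into (1, 13, 0) for numeric comparison."""
    return tuple(int(part) for part in version.split("."))

def is_tested(torch_version, last_tested=LAST_TESTED):
    """True if this Torch release is no newer than the last tested one."""
    return version_tuple(torch_version) <= version_tuple(last_tested)
```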

Step 4: Simplifying Image Generation with a Web UI

To enhance the image generation experience, we will create a web interface using Gradio, a Python library for building web applications.

Create a file named web.py with the following content:

import python_coreml_stable_diffusion.pipeline as pipeline
import gradio as gr
from diffusers import StableDiffusionPipeline

def init(args):
    pipeline.logger.info("Initializing PyTorch pipe for reference configuration")
    pytorch_pipe = StableDiffusionPipeline.from_pretrained(args.model_version,
                                                           use_auth_token=True)
    coreml_pipe = pipeline.get_coreml_pipe(pytorch_pipe=pytorch_pipe,
                                           mlpackages_dir=args.i,
                                           model_version=args.model_version,
                                           compute_unit=args.compute_unit)

    def infer(prompt, steps):
        pipeline.logger.info("Beginning image generation.")
        image = coreml_pipe(prompt=prompt,
                            height=coreml_pipe.height,
                            width=coreml_pipe.width,
                            num_inference_steps=steps)
        return [image["images"][0]]

    demo = gr.Blocks()
    with demo:
        gr.Markdown("<center><h1>Core ML Stable Diffusion</h1>Run Stable Diffusion on Apple Silicon with Core ML</center>")
        with gr.Group():
            with gr.Box():
                with gr.Row():
                    with gr.Column():
                        text = gr.Textbox(label="Prompt", lines=11,
                                          placeholder="Enter your prompt")
                        btn = gr.Button("Generate image")
                        steps = gr.Slider(label="Steps", minimum=1, maximum=50,
                                          value=10, step=1)
                    with gr.Column():
                        gallery = gr.Gallery(label="Generated image", elem_id="gallery")
        text.submit(infer, inputs=[text, steps], outputs=gallery)
        btn.click(infer, inputs=[text, steps], outputs=gallery)
    demo.launch(debug=True, server_name="0.0.0.0")

if __name__ == "__main__":
    parser = pipeline.argparse.ArgumentParser()
    parser.add_argument("-i", required=True,
                        help=("Path to input directory with the .mlpackage files "
                              "generated by python_coreml_stable_diffusion.torch2coreml"))
    parser.add_argument("--model-version", default="CompVis/stable-diffusion-v1-4",
                        help="The pre-trained model checkpoint and configuration to restore.")
    parser.add_argument("--compute-unit", choices=pipeline.get_available_compute_units(),
                        default="ALL",
                        help="The compute units to be used when executing Core ML models.")
    args = parser.parse_args()
    init(args)

Save this file in the python_coreml_stable_diffusion directory, and run the following command:

python -m python_coreml_stable_diffusion.web -i ./models --compute-unit ALL

You should see logs indicating that the web service is running. Access it in your browser at http://localhost:7860 (the server listens on all interfaces, so other machines on your network can also reach it via your Mac's IP address).

Testing the Web Interface

In your browser, input "colorful star trails" as the prompt and click "Generate image." Wait for the image to appear on the right side.

Now, generating images has become much more straightforward. You only need to modify the prompt text and click a single button to produce your image, which saves you the hassle of adjusting command parameters or navigating through files.


