starsfriday committed
Commit 4d06fc9 · verified · 1 parent: 0e4fb4d

Upload folder using huggingface_hub

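The commit message indicates the files were pushed with huggingface_hub's folder upload; a minimal sketch of that call (the local folder path is a hypothetical assumption):

```python
from huggingface_hub import HfApi

api = HfApi()
# Push the local folder to the model repo as a single commit.
api.upload_folder(
    folder_path="./Qwen-Image-Edit-2509-Upscale2K",  # hypothetical local path
    repo_id="valiantcat/Qwen-Image-Edit-2509-Upscale2K",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```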
.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ result/result1.png filter=lfs diff=lfs merge=lfs -text
+ result/result2.png filter=lfs diff=lfs merge=lfs -text
+ result/result3.png filter=lfs diff=lfs merge=lfs -text
+ result/test.jpg filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,97 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - Qwen/Qwen-Image-Edit-2509
+ tags:
+ - image-generation
+ - lora
+ - Qwen-Image
+ pipeline_tag: image-to-image
+ library_name: diffusers
+ widget:
+ - text: >-
+     Upscale this picture to 4K resolution.
+   output:
+     url: result/result1.png
+ - text: >-
+     Upscale this picture to 4K resolution.
+   output:
+     url: result/result2.png
+ - text: >-
+     Upscale this picture to 4K resolution.
+   output:
+     url: result/result3.png
+ 
+ ---
+ # valiantcat Qwen-Image-Edit-2509 LoRA
+ 
+ <Gallery />
+ 
+ ## Model Card for Model ID
+ 
+ <!-- Provide a quick summary of what the model is/does. -->
+ 
+ This is a LoRA for high-definition image magnification, trained on ```Qwen/Qwen-Image-Edit-2509```. It is mainly used for losslessly enlarging images to approximately 2K size, and is intended for use in ```ComfyUI```.
+ 
+ <div style="background-color: white; padding: 15px; border-radius: 8px; margin: 15px 0; box-shadow: 0 2px 4px rgba(0,0,0,0.1);">
+ <h2 style="color: #24292e; margin-top: 0;">ComfyUI Workflow</h2>
+ <p>This LoRA works with a modified version of <a href="https://huggingface.co/valiantcat/Qwen-Image-Edit-2509-Upscale2K/blob/main/Upscale.json" style="color: #0366d6; text-decoration: none;">Comfy's Qwen/Qwen-Image-Edit-2509 workflow</a>. The main modification is adding a Qwen/Qwen-Image-Edit-2509 LoRA node connected to the base model.</p>
+ <p>See the Download section below for the modified workflow.</p>
+ </div>
+ 
+ ### Direct Use
+ 
+ ```python
+ from diffusers import QwenImageEditPipeline
+ import torch
+ from PIL import Image
+ 
+ # Load the pipeline
+ pipeline = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2509")
+ pipeline.to(torch.bfloat16)
+ pipeline.to("cuda")
+ 
+ # Load the trained upscaling LoRA weights
+ pipeline.load_lora_weights("valiantcat/Qwen-Image-Edit-2509-Upscale2K", weight_name="qwen_image_edit_2509_upscale.safetensors")
+ 
+ # Load the input image
+ image = Image.open("./result/test.jpg").convert("RGB")
+ 
+ # Define the upscaling prompt
+ prompt = "Upscale this picture to 4K resolution."
+ 
+ # Generate the upscaled image
+ inputs = {
+     "image": image,
+     "prompt": prompt,
+     "generator": torch.manual_seed(12345),
+     "true_cfg_scale": 4.0,
+     "negative_prompt": " ",
+     "num_inference_steps": 50,
+ }
+ 
+ with torch.inference_mode():
+     output = pipeline(**inputs)
+     output_image = output.images[0]
+     output_image.save("edited_image.png")
+ ```
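+ 
+ If the full bf16 pipeline does not fit in GPU memory, diffusers' standard offloading hook can be used instead of moving everything to CUDA at once; a minimal sketch (generic diffusers API, not something this card prescribes):
+ 
+ ```python
+ # Instead of pipeline.to("cuda"), stream submodules to the GPU on demand.
+ # Trades some speed for a much lower peak VRAM footprint.
+ pipeline.enable_model_cpu_offload()
+ ```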
+ 
+ ## Trigger phrase
+ 
+ ```Upscale this picture to 4K resolution.```
+ 
+ There is no fixed trigger word; the upscaling prompt above is recommended, and other phrasings still need more testing.
+ 
+ ## Download model
+ 
+ Weights for this model are available in Safetensors format.
+ 
+ [Download](https://huggingface.co/valiantcat/Qwen-Image-Edit-2509-Upscale2K)
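+ 
+ The weights can also be fetched programmatically; a short sketch using huggingface_hub (the file name matches this repository):
+ 
+ ```python
+ from huggingface_hub import hf_hub_download
+ 
+ # Download the LoRA weights into the local Hugging Face cache.
+ lora_path = hf_hub_download(
+     repo_id="valiantcat/Qwen-Image-Edit-2509-Upscale2K",
+     filename="qwen_image_edit_2509_upscale.safetensors",
+ )
+ print(lora_path)
+ ```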
+ 
+ ## Training at Chongqing Valiant Cat
+ 
+ This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (```https://vvicat.com/```). Business cooperation is welcome.
Upscale.json ADDED
@@ -0,0 +1,1128 @@
+ {
+ "id": "91f6bbe2-ed41-4fd6-bac7-71d5b5864ecb",
+ "revision": 0,
+ "last_node_id": 112,
+ "last_link_id": 192,
+ "nodes": [
+ {
+ "id": 99,
+ "type": "Seed (rgthree)",
+ "pos": [
+ 420.96197406560714,
+ 466.84280189097996
+ ],
+ "size": [
+ 210,
+ 130
+ ],
+ "flags": {},
+ "order": 0,
+ "mode": 0,
+ "inputs": [],
+ "outputs": [
+ {
+ "dir": 4,
+ "label": "Seed",
+ "name": "SEED",
+ "shape": 3,
+ "type": "INT",
+ "links": [
+ 162
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "rgthree-comfy",
+ "ver": "f044a9dbb3fc9de55c6244d616d386986add3072"
+ },
+ "widgets_values": [
+ -1,
+ "",
+ "",
+ ""
+ ]
+ },
+ {
+ "id": 38,
+ "type": "CLIPLoader",
+ "pos": [
+ -34.261653900146484,
+ 198.56765747070312
+ ],
+ "size": [
+ 330,
+ 110
+ ],
+ "flags": {},
+ "order": 1,
+ "mode": 0,
+ "inputs": [],
+ "outputs": [
+ {
+ "name": "CLIP",
+ "type": "CLIP",
+ "slot_index": 0,
+ "links": [
+ 75,
+ 167
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.48",
+ "Node name for S&R": "CLIPLoader",
+ "models": [
+ {
+ "name": "qwen_2.5_vl_7b_fp8_scaled.safetensors",
+ "url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors",
+ "directory": "text_encoders"
+ }
+ ],
+ "enableTabs": false,
+ "tabWidth": 65,
+ "tabXOffset": 10,
+ "hasSecondTab": false,
+ "secondTabText": "Send Back",
+ "secondTabOffset": 80,
+ "secondTabWidth": 65,
+ "widget_ue_connectable": {}
+ },
+ "widgets_values": [
+ "qwen_2.5_vl_7b_fp8_scaled.safetensors",
+ "qwen_image",
+ "default"
+ ]
+ },
+ {
+ "id": 39,
+ "type": "VAELoader",
+ "pos": [
+ -35.67443084716797,
+ 346.28021240234375
+ ],
+ "size": [
+ 330,
+ 60
+ ],
+ "flags": {},
+ "order": 2,
+ "mode": 0,
+ "inputs": [],
+ "outputs": [
+ {
+ "name": "VAE",
+ "type": "VAE",
+ "slot_index": 0,
+ "links": [
+ 76,
+ 171
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.48",
+ "Node name for S&R": "VAELoader",
+ "models": [
+ {
+ "name": "qwen_image_vae.safetensors",
+ "url": "https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors",
+ "directory": "vae"
+ }
+ ],
+ "enableTabs": false,
+ "tabWidth": 65,
+ "tabXOffset": 10,
+ "hasSecondTab": false,
+ "secondTabText": "Send Back",
+ "secondTabOffset": 80,
+ "secondTabWidth": 65,
+ "widget_ue_connectable": {}
+ },
+ "widgets_values": [
+ "qwen_image_vae.safetensors"
+ ]
+ },
+ {
+ "id": 3,
+ "type": "KSampler",
+ "pos": [
+ 705.1409596613103,
+ 34.84122260631187
+ ],
+ "size": [
+ 253.9251708984375,
+ 546.444091796875
+ ],
+ "flags": {},
+ "order": 12,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "model",
+ "type": "MODEL",
+ "link": 125
+ },
+ {
+ "name": "positive",
+ "type": "CONDITIONING",
+ "link": 168
+ },
+ {
+ "name": "negative",
+ "type": "CONDITIONING",
+ "link": 142
+ },
+ {
+ "name": "latent_image",
+ "type": "LATENT",
+ "link": 183
+ },
+ {
+ "name": "seed",
+ "type": "INT",
+ "widget": {
+ "name": "seed"
+ },
+ "link": 162
+ }
+ ],
+ "outputs": [
+ {
+ "name": "LATENT",
+ "type": "LATENT",
+ "slot_index": 0,
+ "links": [
+ 128
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.48",
+ "Node name for S&R": "KSampler",
+ "enableTabs": false,
+ "tabWidth": 65,
+ "tabXOffset": 10,
+ "hasSecondTab": false,
+ "secondTabText": "Send Back",
+ "secondTabOffset": 80,
+ "secondTabWidth": 65,
+ "widget_ue_connectable": {}
+ },
+ "widgets_values": [
+ 674279300428617,
+ "randomize",
+ 4,
+ 1,
+ "euler",
+ "simple",
+ 1
+ ]
+ },
+ {
+ "id": 8,
+ "type": "VAEDecode",
+ "pos": [
+ 722.103545110529,
+ -59.922979511608055
+ ],
+ "size": [
+ 210,
+ 46
+ ],
+ "flags": {
+ "collapsed": false
+ },
+ "order": 13,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "samples",
+ "type": "LATENT",
+ "link": 128
+ },
+ {
+ "name": "vae",
+ "type": "VAE",
+ "link": 76
+ }
+ ],
+ "outputs": [
+ {
+ "name": "IMAGE",
+ "type": "IMAGE",
+ "slot_index": 0,
+ "links": [
+ 161,
+ 179
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.48",
+ "Node name for S&R": "VAEDecode",
+ "enableTabs": false,
+ "tabWidth": 65,
+ "tabXOffset": 10,
+ "hasSecondTab": false,
+ "secondTabText": "Send Back",
+ "secondTabOffset": 80,
+ "secondTabWidth": 65,
+ "widget_ue_connectable": {}
+ },
+ "widgets_values": []
+ },
+ {
+ "id": 66,
+ "type": "ModelSamplingAuraFlow",
+ "pos": [
+ 370.60226337224776,
+ -67.29079643665688
+ ],
+ "size": [
+ 300,
+ 58
+ ],
+ "flags": {},
+ "order": 11,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "model",
+ "type": "MODEL",
+ "link": 130
+ }
+ ],
+ "outputs": [
+ {
+ "name": "MODEL",
+ "type": "MODEL",
+ "links": [
+ 125
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.48",
+ "Node name for S&R": "ModelSamplingAuraFlow",
+ "enableTabs": false,
+ "tabWidth": 65,
+ "tabXOffset": 10,
+ "hasSecondTab": false,
+ "secondTabText": "Send Back",
+ "secondTabOffset": 80,
+ "secondTabWidth": 65,
+ "widget_ue_connectable": {}
+ },
+ "widgets_values": [
+ 3.1000000000000005
+ ]
+ },
+ {
+ "id": 7,
+ "type": "CLIPTextEncode",
+ "pos": [
+ 334.4206532648259,
+ 265.8027017933237
+ ],
+ "size": [
+ 350.8778076171875,
+ 138.01828002929688
+ ],
+ "flags": {
+ "collapsed": false
+ },
+ "order": 5,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "clip",
+ "type": "CLIP",
+ "link": 75
+ }
+ ],
+ "outputs": [
+ {
+ "name": "CONDITIONING",
+ "type": "CONDITIONING",
+ "slot_index": 0,
+ "links": [
+ 142
+ ]
+ }
+ ],
+ "title": "CLIP Text Encode (Negative Prompt)",
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.48",
+ "Node name for S&R": "CLIPTextEncode",
+ "enableTabs": false,
+ "tabWidth": 65,
+ "tabXOffset": 10,
+ "hasSecondTab": false,
+ "secondTabText": "Send Back",
+ "secondTabOffset": 80,
+ "secondTabWidth": 65,
+ "widget_ue_connectable": {}
+ },
+ "widgets_values": [
+ "",
+ [
+ false,
+ true
+ ]
+ ]
+ },
+ {
+ "id": 98,
+ "type": "PreviewImage",
+ "pos": [
+ 1027.363125565962,
+ 127.53016043368171
+ ],
+ "size": [
+ 252.37942504882812,
+ 369.21356201171875
+ ],
+ "flags": {},
+ "order": 14,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "images",
+ "type": "IMAGE",
+ "link": 161
+ }
+ ],
+ "outputs": [],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.50",
+ "Node name for S&R": "PreviewImage"
+ },
+ "widgets_values": []
+ },
+ {
+ "id": 108,
+ "type": "CR SDXL Aspect Ratio",
+ "pos": [
+ 10.162493385662,
+ 475.82653395774093
+ ],
+ "size": [
+ 270,
+ 278
+ ],
+ "flags": {},
+ "order": 10,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "width",
+ "type": "INT",
+ "widget": {
+ "name": "width"
+ },
+ "link": 188
+ },
+ {
+ "name": "height",
+ "type": "INT",
+ "widget": {
+ "name": "height"
+ },
+ "link": 189
+ }
+ ],
+ "outputs": [
+ {
+ "label": "Width",
+ "name": "width",
+ "type": "INT",
+ "links": []
+ },
+ {
+ "label": "Height",
+ "name": "height",
+ "type": "INT",
+ "links": []
+ },
+ {
+ "label": "Upscale Factor",
+ "name": "upscale_factor",
+ "type": "FLOAT",
+ "links": null
+ },
+ {
+ "label": "Batch Size",
+ "name": "batch_size",
+ "type": "INT",
+ "links": null
+ },
+ {
+ "name": "empty_latent",
+ "type": "LATENT",
+ "links": [
+ 183
+ ]
+ },
+ {
+ "name": "show_help",
+ "type": "STRING",
+ "links": null
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfyroll",
+ "ver": "d78b780ae43fcf8c6b7c6505e6ffb4584281ceca",
+ "Node name for S&R": "CR SDXL Aspect Ratio"
+ },
+ "widgets_values": [
+ 1024,
+ 1024,
+ "custom",
+ "Off",
+ 1,
+ 1
+ ]
+ },
+ {
+ "id": 106,
+ "type": "ImageConcanate",
+ "pos": [
+ 1018.2929669660746,
+ -43.16162642667016
+ ],
+ "size": [
+ 270,
+ 102
+ ],
+ "flags": {},
+ "order": 15,
+ "mode": 0,
+ "inputs": [
+ {
+ "label": "Image_1",
+ "name": "image1",
+ "type": "IMAGE",
+ "link": 190
+ },
+ {
+ "label": "Image_2",
+ "name": "image2",
+ "type": "IMAGE",
+ "link": 179
+ }
+ ],
+ "outputs": [
+ {
+ "label": "Image",
+ "name": "IMAGE",
+ "type": "IMAGE",
+ "links": [
+ 180
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfyui-kjnodes",
+ "ver": "e81f33508b0821ea2f53f4f46a833fa6215626bd",
+ "Node name for S&R": "ImageConcanate"
+ },
+ "widgets_values": [
+ "right",
+ true
+ ]
+ },
+ {
+ "id": 104,
+ "type": "LayerUtility: ImageScaleByAspectRatio V2",
+ "pos": [
+ -347.5305155140268,
+ -43.699890946819856
+ ],
+ "size": [
+ 303.68515625,
+ 330
+ ],
+ "flags": {},
+ "order": 7,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "image",
+ "shape": 7,
+ "type": "IMAGE",
+ "link": 173
+ },
+ {
+ "name": "mask",
+ "shape": 7,
+ "type": "MASK",
+ "link": null
+ }
+ ],
+ "outputs": [
+ {
+ "name": "image",
+ "type": "IMAGE",
+ "links": [
+ 174,
+ 190
+ ]
+ },
+ {
+ "name": "mask",
+ "type": "MASK",
+ "links": null
+ },
+ {
+ "name": "original_size",
+ "type": "BOX",
+ "links": null
+ },
+ {
+ "name": "width",
+ "type": "INT",
+ "links": [
+ 188
+ ]
+ },
+ {
+ "name": "height",
+ "type": "INT",
+ "links": [
+ 189
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfyui_layerstyle",
+ "ver": "c0fb64d0ebcb81c6c445a8af79ecee24bc3845b0",
+ "Node name for S&R": "LayerUtility: ImageScaleByAspectRatio V2"
+ },
+ "widgets_values": [
+ "original",
+ 1,
+ 1,
+ "letterbox",
+ "lanczos",
+ "16",
+ "longest",
+ 1920,
+ "#000000"
+ ],
+ "color": "rgba(38, 73, 116, 0.7)"
+ },
+ {
+ "id": 73,
+ "type": "LoraLoaderModelOnly",
+ "pos": [
+ 17.095629676629812,
+ -144.31324750664712
+ ],
+ "size": [
+ 260.8006286621094,
+ 82
+ ],
+ "flags": {},
+ "order": 8,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "model",
+ "type": "MODEL",
+ "link": 192
+ }
+ ],
+ "outputs": [
+ {
+ "name": "MODEL",
+ "type": "MODEL",
+ "links": [
+ 130
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.49",
+ "Node name for S&R": "LoraLoaderModelOnly",
+ "models": [
+ {
+ "name": "Qwen-Image-Lightning-8steps-V1.0.safetensors",
+ "url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors",
+ "directory": "loras"
+ }
+ ]
+ },
+ "widgets_values": [
+ "qwen-image/Qwen-Image-Edit-Lightning-4steps-V1.0-bf16.safetensors",
+ 1
+ ]
+ },
+ {
+ "id": 111,
+ "type": "UNETLoader",
+ "pos": [
+ -3.7152023315429688,
+ 62.60336685180664
+ ],
+ "size": [
+ 270,
+ 82
+ ],
+ "flags": {},
+ "order": 3,
+ "mode": 0,
+ "inputs": [],
+ "outputs": [
+ {
+ "name": "MODEL",
+ "type": "MODEL",
+ "links": [
+ 191
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.59",
+ "Node name for S&R": "UNETLoader"
+ },
+ "widgets_values": [
+ "qwen_image_edit_2509_fp8_e4m3fn.safetensors",
+ "default"
+ ]
+ },
+ {
+ "id": 101,
+ "type": "TextEncodeQwenImageEditPlus",
+ "pos": [
+ 351.94210712224776,
+ 43.102174602161476
+ ],
+ "size": [
+ 320.1754150390625,
+ 173
+ ],
+ "flags": {},
+ "order": 9,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "clip",
+ "type": "CLIP",
+ "link": 167
+ },
+ {
+ "name": "vae",
+ "shape": 7,
+ "type": "VAE",
+ "link": 171
+ },
+ {
+ "name": "image1",
+ "shape": 7,
+ "type": "IMAGE",
+ "link": 174
+ },
+ {
+ "name": "image2",
+ "shape": 7,
+ "type": "IMAGE",
+ "link": null
+ },
+ {
+ "name": "image3",
+ "shape": 7,
+ "type": "IMAGE",
+ "link": null
+ }
+ ],
+ "outputs": [
+ {
+ "name": "CONDITIONING",
+ "type": "CONDITIONING",
+ "links": [
+ 168
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.59",
+ "Node name for S&R": "TextEncodeQwenImageEditPlus"
+ },
+ "widgets_values": [
+ "Upscale this picture to 4K resolution.",
+ [
+ false,
+ true
+ ]
+ ]
+ },
+ {
+ "id": 112,
+ "type": "LoraLoaderModelOnly",
+ "pos": [
+ 10.692115938169668,
+ -42.901722263302254
+ ],
+ "size": [
+ 260.8006286621094,
+ 82
+ ],
+ "flags": {},
+ "order": 6,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "model",
+ "type": "MODEL",
+ "link": 191
+ }
+ ],
+ "outputs": [
+ {
+ "name": "MODEL",
+ "type": "MODEL",
+ "links": [
+ 192
+ ]
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.49",
+ "Node name for S&R": "LoraLoaderModelOnly",
+ "models": [
+ {
+ "name": "Qwen-Image-Lightning-8steps-V1.0.safetensors",
+ "url": "https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors",
+ "directory": "loras"
+ }
+ ]
+ },
+ "widgets_values": [
+ "qwen-image/qwen_image_edit_2509_upscale.safetensors",
+ 1
+ ]
+ },
+ {
+ "id": 103,
+ "type": "LoadImage",
+ "pos": [
+ -637.1823414972478,
+ -31.204213495423637
+ ],
+ "size": [
+ 265.8947448730469,
+ 329.90789794921875
+ ],
+ "flags": {},
+ "order": 4,
+ "mode": 0,
+ "inputs": [],
+ "outputs": [
+ {
+ "name": "IMAGE",
+ "type": "IMAGE",
+ "links": [
+ 173
+ ]
+ },
+ {
+ "name": "MASK",
+ "type": "MASK",
+ "links": null
+ }
+ ],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.50",
+ "Node name for S&R": "LoadImage"
+ },
+ "widgets_values": [
+ "20251020-164918.jpg",
+ "image"
+ ]
+ },
+ {
+ "id": 107,
+ "type": "PreviewImage",
+ "pos": [
+ 1265.9059373285024,
+ 181.60493213678734
+ ],
+ "size": [
+ 327.1199645996094,
+ 295.25982666015625
+ ],
+ "flags": {},
+ "order": 16,
+ "mode": 0,
+ "inputs": [
+ {
+ "name": "images",
+ "type": "IMAGE",
+ "link": 180
+ }
+ ],
+ "outputs": [],
+ "properties": {
+ "cnr_id": "comfy-core",
+ "ver": "0.3.50",
+ "Node name for S&R": "PreviewImage"
+ },
+ "widgets_values": []
+ }
+ ],
+ "links": [
+ [
+ 75,
+ 38,
+ 0,
+ 7,
+ 0,
+ "CLIP"
+ ],
+ [
+ 76,
+ 39,
+ 0,
+ 8,
+ 1,
+ "VAE"
+ ],
+ [
+ 125,
+ 66,
+ 0,
+ 3,
+ 0,
+ "MODEL"
+ ],
+ [
+ 128,
+ 3,
+ 0,
+ 8,
+ 0,
+ "LATENT"
+ ],
+ [
+ 130,
+ 73,
+ 0,
+ 66,
+ 0,
+ "MODEL"
+ ],
+ [
+ 142,
+ 7,
+ 0,
+ 3,
+ 2,
+ "CONDITIONING"
+ ],
+ [
+ 161,
+ 8,
+ 0,
+ 98,
+ 0,
+ "IMAGE"
+ ],
+ [
+ 162,
+ 99,
+ 0,
+ 3,
+ 4,
+ "INT"
+ ],
+ [
+ 167,
+ 38,
+ 0,
+ 101,
+ 0,
+ "CLIP"
+ ],
+ [
+ 168,
+ 101,
+ 0,
+ 3,
+ 1,
+ "CONDITIONING"
+ ],
+ [
+ 171,
+ 39,
+ 0,
+ 101,
+ 1,
+ "VAE"
+ ],
+ [
+ 173,
+ 103,
+ 0,
+ 104,
+ 0,
+ "IMAGE"
+ ],
+ [
+ 174,
+ 104,
+ 0,
+ 101,
+ 2,
+ "IMAGE"
+ ],
+ [
+ 179,
+ 8,
+ 0,
+ 106,
+ 1,
+ "IMAGE"
+ ],
+ [
+ 180,
+ 106,
+ 0,
+ 107,
+ 0,
+ "IMAGE"
+ ],
+ [
+ 183,
+ 108,
+ 4,
+ 3,
+ 3,
+ "LATENT"
+ ],
+ [
+ 188,
+ 104,
+ 3,
+ 108,
+ 0,
+ "INT"
+ ],
+ [
+ 189,
+ 104,
+ 4,
+ 108,
+ 1,
+ "INT"
+ ],
+ [
+ 190,
+ 104,
+ 0,
+ 106,
+ 0,
+ "IMAGE"
+ ],
+ [
+ 191,
+ 111,
+ 0,
+ 112,
+ 0,
+ "MODEL"
+ ],
+ [
+ 192,
+ 112,
+ 0,
+ 73,
+ 0,
+ "MODEL"
+ ]
+ ],
+ "groups": [
+ {
+ "id": 5,
+ "title": "Model Loading",
+ "bounding": [
+ -45.6744270324707,
+ -142.2353515625,
+ 359.0284729003906,
+ 558.515625
+ ],
+ "color": "#3f789e",
+ "font_size": 24,
+ "flags": {}
+ },
+ {
+ "id": 7,
+ "title": "Latent",
+ "bounding": [
+ 323.15362445623214,
+ -140.8908254283561,
+ 646.9595336914062,
+ 741.6004028320312
+ ],
+ "color": "#3f789e",
+ "font_size": 24,
+ "flags": {}
+ },
+ {
+ "id": 8,
+ "title": "Input",
+ "bounding": [
+ -653.845947265625,
+ -141.2954864501953,
+ 602.335520408936,
+ 553.0442260326406
+ ],
+ "color": "#3f789e",
+ "font_size": 24,
+ "flags": {}
+ },
+ {
+ "id": 9,
+ "title": "Result",
+ "bounding": [
+ 991.9212920698683,
+ -137.03938912686516,
+ 653.953369140625,
+ 732.4771118164062
+ ],
+ "color": "#3f789e",
+ "font_size": 24,
+ "flags": {}
+ }
+ ],
+ "config": {},
+ "extra": {
+ "ds": {
+ "scale": 0.6934334949441332,
+ "offset": [
+ 846.697185476935,
+ 446.278196771317
+ ]
+ },
+ "frontendVersion": "1.32.9",
+ "ue_links": [],
+ "links_added_by_ue": [],
+ "VHS_latentpreview": false,
+ "VHS_latentpreviewrate": 0,
+ "VHS_MetadataImage": true,
+ "VHS_KeepIntermediate": true,
+ "workflowRendererVersion": "LG"
+ },
+ "version": 0.4
+ }
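For orientation in the 1,128-line workflow above, a short standard-library sketch that loads the JSON and summarizes its graph (file name as added in this commit):

```python
import json
from collections import Counter

# Load the ComfyUI workflow added in this commit and summarize its node graph.
with open("Upscale.json", encoding="utf-8") as f:
    workflow = json.load(f)

print("nodes:", len(workflow["nodes"]), "links:", len(workflow["links"]))
print(Counter(node["type"] for node in workflow["nodes"]))
```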
qwen_image_edit_2509_upscale.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1ce2406e598982aa6e6a51191c272cf6b2092909586653c6b764e6ae9d80d34
+ size 590057176
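The pointer above follows the Git LFS pointer-file format (a version line plus `oid` and `size` fields); a tiny parsing sketch, using only string handling:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:e1ce2406e598982aa6e6a51191c272cf6b2092909586653c6b764e6ae9d80d34
size 590057176"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```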
result/result1.png ADDED

Git LFS Details

  • SHA256: 23f93d7383159ee519b6abe2e7d7af2a56e5241e065949506d903a0def9f15d9
  • Pointer size: 132 Bytes
  • Size of remote file: 4.66 MB
result/result2.png ADDED

Git LFS Details

  • SHA256: 6f2afe6247d56eaf40398d4847fa5e993a395f548c9e26fa4e710ff541ffc2da
  • Pointer size: 132 Bytes
  • Size of remote file: 6.19 MB
result/result3.png ADDED

Git LFS Details

  • SHA256: c9d650c4e311ecc37558f4a3a239978869743356d586791f380a750b85f5459a
  • Pointer size: 132 Bytes
  • Size of remote file: 4.31 MB
result/test.jpg ADDED

Git LFS Details

  • SHA256: fbf6c0ad9e8166094aa0ec3f5e1956ec107e69e3357124f531083885f1037a88
  • Pointer size: 131 Bytes
  • Size of remote file: 295 kB