Prithiv Sakthi PRO

prithivMLmods

AI & ML interests

Computer Vision - AI/ML

prithivMLmods's activity

replied to their post about 10 hours ago
posted an update 1 day ago
I am experimenting with the Flux-Realism and Flux-Anime LoRA models, using the Flux.1-dev & schnell models as the base. The results improve significantly as the image dimensions increase. 🎈

The demos for the respective trials:
- prithivMLmods/FLUX-REALISM
- prithivMLmods/FLUX-ANIME

Models:
- prithivMLmods/Canopus-LoRA-Flux-FaceRealism
- prithivMLmods/Canopus-LoRA-Flux-Anime

Datasets:
- prithivMLmods/Canopus-Realism-Minimalist
- https://4kwallpapers.com
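
For reference, here is a minimal sketch of loading one of these LoRAs on the Flux.1-dev base with diffusers; the dtype, resolution, and trigger prompt are assumptions, not taken from the trials above.

import torch
from diffusers import FluxPipeline

# Flux.1-dev base; the schnell variant can be swapped in the same way.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("prithivMLmods/Canopus-LoRA-Flux-FaceRealism")

# Larger output dimensions tend to improve results, as noted above.
image = pipe(
    "portrait photo of a man, natural light",  # assumed prompt
    height=1024, width=1024, guidance_scale=3.5,
).images[0]
image.save("flux-realism.png")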
replied to victor's post 28 days ago

> I think a concrete example would help HF deal with this. (I hope you don't mean me...)

Hi @John6666, I didn't mean you at all. I was referring to people posting 'Not Safe for Work' content and links attached to Spaces. [They even countered by arguing that it was a Hugging Face feature.]

replied to victor's post 28 days ago
  • I frequently see random people duplicating top-trending Spaces and promoting illegal ads and unethical activities with links to external sites. It would be beneficial to restrict these actions by issuing warnings when they attempt to commit or upload such files. [PS: I still come across it.]

  • Additionally, implementing chat support within Hugging Face would be valuable. This feature could provide knowledge and guidance for those who are just starting to build today, helping them navigate the platform and use its tools effectively.

  • An [ Activity Overview ] page for users.

posted an update 30 days ago
posted an update about 1 month ago
✨ The STABLE IMAGINE !! ✨
🍺 Space: prithivMLmods/STABLE-IMAGINE
↗️ The LoRAs in the Space require the appropriate trigger words to produce good results.
📒 Article: https://huggingface.co/blog/prithivMLmods/lora-adp-01

**Description and Utility Functions**
✅ Most-likely image generation
☑️ Works best with the most accurate trigger words
✅ Each LoRA designed to capture different artistic elements
☑️ Specialized styles and characteristics
✅ Flexible, keyword-centric design for what is needed
☑️ Increased productivity

🫙 Repository: https://github.com/prithivsakthiur/gen-vision
📔 Colab Link: https://colab.research.google.com/drive/1axA0pU--32t4a8AHiVlt6zyl8gRfiXKs
*️⃣ Notebook: prithivMLmods/STABLE-IMAGINE

lora_options = {
    "Realism (face/character)": ("prithivMLmods/Canopus-Realism-LoRA", "Canopus-Realism-LoRA.safetensors", "rlms"),
    "Pixar (art/toons)": ("prithivMLmods/Canopus-Pixar-Art", "Canopus-Pixar-Art.safetensors", "pixar"),
    "Photoshoot (camera/film)": ("prithivMLmods/Canopus-Photo-Shoot-Mini-LoRA", "Canopus-Photo-Shoot-Mini-LoRA.safetensors", "photo"),
    "Clothing (hoodies/pant/shirts)": ("prithivMLmods/Canopus-Clothing-Adp-LoRA", "Canopus-Dress-Clothing-LoRA.safetensors", "clth"),
}
...
# Register every LoRA as a named adapter on the pipeline
for model_name, weight_name, adapter_name in lora_options.values():
    pipe.load_lora_weights(model_name, weight_name=weight_name, adapter_name=adapter_name)
pipe.to("cuda")
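
For context, a minimal sketch of how the pipe above might be created and a single adapter activated at generation time; the SDXL base checkpoint and the prompt are assumptions, not taken from the Space.

import torch
from diffusers import StableDiffusionXLPipeline

# Assumed SDXL base; the post does not name the exact checkpoint.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# ... load the LoRAs as shown above ...

# Activate one adapter by the name registered in lora_options,
# then include its trigger word ("rlms") in the prompt.
pipe.set_adapters("rlms")
pipe.to("cuda")
image = pipe("rlms, portrait photo of a man, studio lighting").images[0]
image.save("out.png")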



replied to their post about 2 months ago

Hi @AleksPokd, I will work on the idea after completing my ongoing work.

Thank you for the idea; we might collab 🙂

posted an update 2 months ago
πŸ”΄β­ New addition to the existing concept space! πŸ”΄β­

🏞️ Space: prithivMLmods/IMAGINEO-4K

🚀 I tried the Duotone Canvas with the image generator. Unlike the duotone filter in the Canva app, which applies hues and tints in RGBA, this feature applies duotones based purely on the provided prompt to personalize the generated image.

🚀 These tones also work with the gridding option, which already exists in the Space.

🚀 The application of tones depends on the quality and detail of the prompt given. The palette may be distorted in some cases.

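For contrast, a classic filter-style duotone (the RGBA hue/tint approach this Space does not use) simply maps grayscale intensity onto two colors. A minimal PIL sketch, with assumed shadow/highlight colors and file names:

from PIL import Image, ImageOps

img = Image.open("input.png").convert("L")  # grayscale base
# Map shadows to a deep blue and highlights to a warm cream.
duotone = ImageOps.colorize(img, black=(20, 30, 90), white=(250, 240, 210))
duotone.save("duotone.png")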

🏞️ Check out the Space: prithivMLmods/IMAGINEO-4K
🏜️ Collection: https://huggingface.co/collections/prithivMLmods/collection-zero-65e48a7dd8212873836ceca2


🏞️ What you can do with this Space:
βœ… Compose Image Grid
πŸ‘‰πŸ» "2x1", "1x2", "2x2", "2x3", "3x2", "1x1"
βœ… Apply styles
βœ… Set up Image tones
βœ… Apply filters & adjust quality
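
The grid composition step can be pictured as simple tiling; a rough PIL sketch, where the tile size and grid shape are assumptions:

from PIL import Image

def compose_grid(images, rows, cols, tile=(512, 512)):
    # Paste equally sized tiles into a rows x cols canvas.
    grid = Image.new("RGB", (cols * tile[0], rows * tile[1]))
    for i, img in enumerate(images[: rows * cols]):
        grid.paste(img.resize(tile), ((i % cols) * tile[0], (i // cols) * tile[1]))
    return grid

# e.g. a "2x2" grid from four generated images:
# grid = compose_grid([img1, img2, img3, img4], rows=2, cols=2)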

.
.
.
Thanks for reading!
- @prithivMLmods
replied to their post 2 months ago

Hi @AleksPokd,

Can you describe this more fully: "it would be fantastic to consider adding the Image Reference or Character Reference feature"? That would help me move forward. Do you mean going from text-to-image to image-to-image?

replied to GeorgeosDiazMontexano's post 2 months ago

It is derived from the SDXL 1.0, JuggernautXL, and Epic Realism base models, with auto-labeling done by the wd-v1-4-vit-tagger-v2 and wd-v1-4-convnext-tagger-v2 algorithms; alternatively, you can do the batch labeling manually using Automatic1111 on RunPod, Tensor Art, and others. There is no diffusers-specific training.

Regarding the question "Is it possible to extract the text model to apply the deltas to the Mistral model? Did they use a Mistral base, as it has 32 layers (31 transformer layers plus the LM head)?":

Transferring fine-tuned results from one model to another can be done, but I have no idea whether the text model's deltas can be applied to the Mistral model, or whether they used a Mistral base.

Do you have any research about it?

replied to GeorgeosDiazMontexano's post 2 months ago
replied to GeorgeosDiazMontexano's post 2 months ago

Haha, okay man! 👍🏻

Have a great day!

replied to GeorgeosDiazMontexano's post 2 months ago

Hmm, commercially you are right ☑️; different approaches arise from different strategies. Using more advanced, cost-effective cloud solutions for deployment aligns well with many development practices. It's essential to balance performance needs with cost considerations to optimize your AI model deployment strategy. Let the environment decide!

PS: @LeroyDyer, please remove the **Not-For-All-Audiences** tag from your recent models if they don't really belong under NFAA/NSFW, because they may not reach the right people for future enhancements. If they do belong there, leave it as it is.

replied to GeorgeosDiazMontexano's post 2 months ago

Colab has always been great, no second thoughts.

posted an update 2 months ago
🚨 New Release: ultralytics 8.2.51
🍺 Live Space: prithivMLmods/YOLO-VIDEO (duplicate the Space to avoid queuing issues)
🍺 T4 Colab: https://colab.research.google.com/drive/1BKgFUfk2Me1cSPFmbtZSVCn_4cYImPO-?au
👉🏻 For HPC, use A100/T4 under controlled conditions.
👉🏻 Speed Estimation, Object Counting, Distance Calculation, Workout Monitoring, Heatmaps, etc.

Ultralytics dropped #Ultralytics 8.2.51 🔥. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification, and pose estimation tasks.

🔗 https://pypi.org/project/ultralytics/8.2.58/

🚀 More features you can try:
✅ Class selection support added
✅ Live FPS display in the sidebar
✅ Webcam and video support added
✅ Adjustable confidence and NMS thresholds
✅ Segmentation, detection, and pose models supported
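
As a rough illustration of those options outside the Space, a minimal sketch with the ultralytics Python API; the model file, source video, and threshold values here are assumptions:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # nano detection model; -seg / -pose variants also work
results = model.predict(
    source="video.mp4",  # or 0 for a webcam
    conf=0.25,           # confidence threshold
    iou=0.45,            # NMS IoU threshold
    classes=[0],         # restrict detection to selected class IDs
    save=True,           # write annotated output
)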

🙀 Ultralytics live inference: https://docs.ultralytics.com/guides/streamlit-live-inference/

from ultralytics import solutions
solutions.inference()
### Make sure to run the file using the command `streamlit run <file-name.py>`

⚡ Or via the CLI: yolo streamlit-predict

πŸ‘‰πŸ»Advantages of Live Inference

β˜‘οΈ Seamless Real-Time Object Detection: Streamlit combined with YOLOv8 enables real-time object detection directly from your webcam feed. This allows for immediate analysis and insights, making it ideal for applications requiring instant feedback.
β˜‘οΈEfficient Resource Utilization: YOLOv8 optimized algorithm ensure high-speed processing with minimal computational resources.
πŸ™€Ultralytics feature Models: https://docs.ultralytics.com/models/, Ultralytics new Solutions: https://docs.ultralytics.com/solutions/

πŸ‘‰πŸ»Official Documentation:
Ultralytics YOLOv8 Documentation: Refer to the official YOLOv8 documentation for comprehensive guides and insights on various computer vision tasks and projects. πŸ”— https://docs.ultralytics.com/
replied to GeorgeosDiazMontexano's post 2 months ago

@GeorgeosDiazMontexano It wasn't like that, sir. Since the A100 and T4 are performance- and acceleration-centric GPUs, GPU usability has always been tied to price tags. However, on Hugging Face you can use the A100, the T4, and upgraded CPUs. If you need to build something performance-centric, you need GPUs, right? In that case, paying for a Pro subscription with ZeroGPU (HPC) will definitely be more useful per month compared with other resources billed per hour.

All the best,
PrithivSakthi

replied to GeorgeosDiazMontexano's post 2 months ago

Hi @GeorgeosDiazMontexano,

It means that GPU allocation is dynamic and varies per user, with each visitor/user having a quota that gradually resets over time.

So the A100 or other GPU error you are facing will reset after the time limit shown in the error message.

Some Spaces consume a high amount of GPU resources and may deplete your GPU quota.
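
For context on the Space-author side, ZeroGPU hardware is reserved per function call via the spaces decorator, and longer reservations draw more from each visitor's quota. A minimal sketch, assuming the spaces package available inside HF Spaces:

import spaces

@spaces.GPU(duration=60)  # seconds of GPU time reserved per call
def generate(prompt: str) -> str:
    # GPU-bound work (e.g. a diffusers pipeline call) would run here
    # while the ZeroGPU slice is held.
    return f"generated for: {prompt}"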

replied to their post 3 months ago

Hi @ezzdev, this was a demo Space for the computer vision models. You can use the images in your project. Generating images outside ethical bounds (not safe for work) is at your own risk.

replied to their post 3 months ago

Three of them were trained with Tensor Art and the remaining ones with SD on RunPod. Using them with the appropriate base model will give good results, but they are at an initial training stage; better versions may come in the future.

posted an update 3 months ago
Hey guys! @mk230580 @wikeeyang @Yasirkh and others asked how to run Hugging Face Spaces outside of the HF environment, locally with their source editor or in Google Colab. Here is how to do that simply 👇👇.

πŸ“I have just created a step-by-step procedure with a Colab demo link also attached in the repository's README.md.

πŸ”—: https://github.com/prithivsakthiur/how-to-run-huggingface-spaces-on-local-machine-demo

Thanks for the read!
replied to their post 3 months ago
replied to their post 3 months ago

You mentioned that you have experience with JuggernautXL, Rav Animated, SDXL, Realistic Vision, DreamShaper, and LoRA for generating images locally, and that you have a GTX 3060 for processing/acceleration.

Collection Zero consists of Spaces running on ZeroGPU (NVIDIA A100, HPC); DALLE 4K and MidJourney are just quick names I kept for the trend.

⭐ Please state clearly what you need me to do or guide you on, since you mentioned experience with Automatic1111 or ComfyUI on your local hardware.

So, are you trying to run the Spaces on your local hardware, or something else?

replied to their post 3 months ago

Hi @mk230580!

You asked for high-resolution images with fast computation; for your case, I came up with the idea of accelerating on a T4 GPU. Yes, we know the NVIDIA A100 GPU is unmatched in its power and in high-performance computing (HPC) tasks, but apart from that, you can use the T4 as a hardware accelerator. You asked how to run this externally from Hugging Face, right? Use a T4 in Google Colab or any other workspace compatible with it. The A100 is also available in Colab, but you have to be a premium user.

Running on a local system works the same way.

Just have the HF token ready to pass for login:

# Authenticate with Hugging Face
from huggingface_hub import login

# Log in to Hugging Face using the provided token
hf_token = '---pass your Hugging Face access token---'
login(hf_token)

Visit my Colab space for an instance of running locally, outside of HF (hardware accelerator: T4 GPU).
We know the A100 and L4 are available in Colab for premium users / at a cost; the T4 is free for a certain amount of computation, so I went with it. On local hardware, you know what to do.

Second thing: the amount of detail you have in the prompt also shapes the results. See the higher-detail prompts via https://huggingface.co/spaces/prithivMLmods/Top-Prompt-Collection, or on freeflo.ai and PromptHero, for better, more detailed results.

Colab link (example with stabilityai/sdxl-turbo):
https://colab.research.google.com/drive/1zYj5w0howOT3kiuASjn8PnBUXGh_MvhJ#scrollTo=Ok9PcD_kVwUI
You can use various models like RealVisXL_V4.0, Turbo, and more for better results.
**After passing the access token, remove your token before sharing with others.**
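
For reference, a minimal sketch of the kind of sdxl-turbo generation that Colab performs; the code here is assumed, not the notebook's exact contents:

import torch
from diffusers import AutoPipelineForText2Image

# sdxl-turbo is distilled for few-step inference, so guidance is disabled.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    "a cinematic portrait, ultra detailed",  # assumed prompt
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("turbo.png")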

replied to their post 3 months ago

Hi @Yasirkh,

Yes! You can run any Python-based SDK in Google Colab with the appropriate model and its api_url by using the requests library correctly. For example, if you are trying to run a text-to-image model, you can do it with the Inference API.

⚠️ For example:

import requests
import io
from PIL import Image

API_URL = "----------your api addr goes here---------"
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

# You can then open the image with PIL.Image:
image = Image.open(io.BytesIO(image_bytes))

⚠️ You can find your access token under HF Settings >> Access Tokens; substitute it for the x's in headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}.

⚠️ Example: headers = {"Authorization": "Bearer hf_ABC1234567890xyz9876543210"}. Also install the required PyPI libraries.

🚀 Then add the Gradio Blocks for whatever you need the interface function to perform, as sketched below.
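
A minimal sketch of that wiring, reusing the query() helper above; the layout and labels are illustrative only:

import io
import gradio as gr
from PIL import Image

def generate(prompt):
    # Call the Inference API via query() and decode the returned bytes.
    image_bytes = query({"inputs": prompt})
    return Image.open(io.BytesIO(image_bytes))

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    output = gr.Image(label="Result")
    prompt.submit(generate, inputs=prompt, outputs=output)

demo.launch()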

🚀 For your information, this is not the original MidJourney model; I have named the Space "MidJourney" because it performs similar work. Give it a try and let me know whether you got it working. One more thing: you must not commit/push code with access tokens visible; use secret keys or variables (when in repos).

⚠️ If you face any difficulties, you can reply to me again; I will surely help you with the logic, or I will share the Colab work link to make the case easier.

------ Try it in Google Colab / Jupyter Notebook / DataSpell / even in VS Code, wherever you find it easier ------

- Thank you!

posted an update 4 months ago
Hey guys! 🧋

It's time to share the collection of prompts with highly detailed parameters for producing the most detailed, flawless images.

🔗 You can check out the collection at: prithivMLmods/Top-Prompt-Collection

🔒 More than 200 highly detailed prompts have been used in the Spaces.
@prithivMLmods

Thank you for the read!
posted an update 4 months ago
posted an update 5 months ago