Multi-image support
#43 opened 4 months ago by monamp · 1 comment

Seeking resources to perform multimodal semantic search
#41 opened 10 months ago by LukaBloomRox

Why is the max length -21?
#40 opened 11 months ago by George-H · 4 comments

RuntimeError: shape mismatch
#39 opened 12 months ago by ROSCOSMOS · 28 comments

Smaller BLIP2? 350m went missing
#37 opened about 1 year ago by xenophundiblum · 2 comments

Is there any param to make the captions longer and more detailed?
#36 opened about 1 year ago by SpiderOP · 1 comment

Is there a param to ensure the same caption is generated every time?
#35 opened about 1 year ago by xsank

Fine-tuning BLIP-2 with PPO
#34 opened over 1 year ago by ksooklall

Fine-tuning with LoRA
#33 opened over 1 year ago by ksooklall · 4 comments

[AUTOMATED] Model Memory Requirements
#30 opened over 1 year ago by model-sizer-bot

NaN loss when finetuning BLIP-2
#28 opened over 1 year ago by agopalkr · 5 comments

BLIP2 for retrieval
#27 opened over 1 year ago by deleted · 6 comments

8-bit model always returns empty string
#26 opened over 1 year ago by deleted · 9 comments

Salesforce/blip2-opt-2.7b deployment on a SageMaker real-time GPU endpoint [Solved]
#25 opened over 1 year ago by Gustavo-Montenegro · 1 comment

Is there any way to run BLIP-2 on SageMaker?
#24 opened almost 2 years ago by NaveenPanuganti · 1 comment

Adding `safetensors` variant of this model
#23 opened almost 2 years ago by SFconvertbot

BLIP and BLIP-2 comparison?
#22 opened almost 2 years ago by Johanderson · 4 comments

Is there a way to use ViT-L/14 from CLIP?
#20 opened almost 2 years ago by RfKnowledge · 2 comments

How to pass CLIP image embeddings to BLIP2 for captioning?
#19 opened almost 2 years ago by potsu-potsu · 3 comments

[AUTOMATED] Model Memory Requirements
#17 opened almost 2 years ago by model-sizer-bot

Can this model be used for video captioning?
#16 opened about 2 years ago by HugTibers · 2 comments

BLIP2 Always Gives `\n` as Output
#15 opened about 2 years ago by james-passio · 5 comments

Confidence scores for image captioning?
#13 opened about 2 years ago by acmidev · 4 comments

Version misconfiguration for SageMaker
#12 opened over 2 years ago by marcinp

Invoking SageMaker endpoint with BLIP2 model?
#10 opened over 2 years ago by CowboyWay

Question about decoding
#9 opened over 2 years ago by babyta · 1 comment

Google Colab (free) crash due to insufficient memory
#8 opened over 2 years ago by masoudkaviani · 1 comment

Training with a different language model in BLIP-2
#7 opened over 2 years ago by Upyaya

Inference API usage
#4 opened over 2 years ago by robertwolf · 1 comment

Add zero-shot classification task for BLIP-2
#3 opened over 2 years ago by youssefadarrab · 2 comments

How to use BLIP 2.0
#2 opened over 2 years ago by matheusdias · 5 comments

ITM Q-Former
#1 opened over 2 years ago by neromule