Hi, are there any tests or plans to integrate CLIP models for natural-language subject search or similarity search across media (images, videos)?
Tools built on vector search engines such as VectorChord index embeddings from models like CLIP to return relevant results for free-form queries, without requiring specific keywords in the image or video metadata. A small sketch of the idea follows below.
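To illustrate what I mean, here is a minimal Python sketch (assuming the sentence-transformers package and a CLIP checkpoint; the folder path, model name, and query string are just placeholders) that ranks a folder of images against a free-form text query:

```python
# Minimal sketch: rank a folder of images against a free-form text query
# using CLIP embeddings via the sentence-transformers package.
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # CLIP image/text encoder

# Embed every image once; a real catalog would cache these vectors in a
# database or vector index (e.g. VectorChord) instead of recomputing them.
image_paths = sorted(Path("photos").glob("*.jpg"))
image_embeddings = model.encode(
    [Image.open(p) for p in image_paths], convert_to_tensor=True
)

# Embed the natural-language query in the same vector space.
query = "a dog playing on the beach at sunset"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each image, best matches first.
scores = util.cos_sim(query_embedding, image_embeddings)[0]
for score, path in sorted(zip(scores.tolist(), image_paths), reverse=True):
    print(f"{score:.3f}  {path.name}")
```

In a viewer like XnView MP, the image embeddings could be computed during catalog indexing and only the query would need to be embedded at search time.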