sentence-transformers
https://github.com/ukplab/sentence-transformers
Python
Sentence Embeddings with BERT & XLNet
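For context, a minimal sketch of typical library usage (the model name below is just one example of a pretrained checkpoint, not the only option):

```python
# Minimal usage sketch, assuming sentence-transformers is installed
# (e.g. `pip install sentence-transformers`).
from sentence_transformers import SentenceTransformer

# "all-MiniLM-L6-v2" is one example pretrained model; any supported
# checkpoint name can be substituted here.
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "This framework generates sentence embeddings.",
    "Each sentence is mapped to a dense vector.",
]

# encode() returns one dense vector per input sentence.
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, embedding_dim)
```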
Issues
- RuntimeError: Unable to find data type for weight_name='/encoder/layer.0/attention/output/dense/MatMul_output_0'. shape_inference failed to return a type probably this node is from a different domain or using an input produced by such an operator. This may happen if you quantize a model already quantized. You may use extra_options `DefaultTensorType` to indicate the default weight type, usually `onnx.TensorProto.FLOAT`.
- Can longer sequences be encoded? Are the encodings good?
- What is the maximum number of sentences that a fast cluster can cluster?
- Last Token Embedding not matching
- TypeError: T5EncoderModel.forward() got an unexpected keyword argument 'token_type_ids'
- Implementing Embedding Quantization for Dynamic Serving Contexts
- Problem installing sentence-transformers in a Jupyter notebook.
- Cuda out of memory, asking about quantization
- INSTRUCTOR models not working with sentence-transformers via langchain
- Update SentenceTransformer.py to use token length for sorting