From 85149d5f54ece60aca3c90c82e82d9746e8e2951 Mon Sep 17 00:00:00 2001
From: Hamed Babaei Giglou
Date: Sun, 8 Dec 2024 15:01:32 +0100
Subject: [PATCH] :memo: readme improvement

---
 README.md | 65 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 33 insertions(+), 32 deletions(-)

diff --git a/README.md b/README.md
index 1046ccf..5c3dd15 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,4 @@
-πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄Under ConstructionπŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄πŸ”΄
-
-
 OntoAligner Logo
@@ -21,29 +18,37 @@
 
 **OntoAligner** is a Python library designed to simplify ontology alignment and matching for researchers, practitioners, and developers. With a modular architecture and robust features, OntoAligner provides powerful tools to bridge ontologies effectively.
 
-## Installation
+## πŸ§ͺ Installation
 
-OntoAligner is available on PyPI and can be installed with pip:
+You can install **OntoAligner** from PyPI using `pip`:
 
 ```bash
 pip install ontoaligner
 ```
 
-Alternatively, install the latest version directly from the source:
+Alternatively, to get the latest version directly from the source, use the following commands:
 
 ```bash
 git clone git@github.com:sciknoworg/OntoAligner.git
 pip install ./ontoaligner
 ```
-
-## Documentation
+## πŸ“š Documentation
 
 Comprehensive documentation for OntoAligner, including detailed guides and examples, is available at **[ontoaligner.readthedocs.io](https://ontoaligner.readthedocs.io/)**.
 
----
+**Tutorials**
 
-## Quick Tour
+| Example                        | Tutorial                                                                                                   | Script                                                                                         |
+|:-------------------------------|:-----------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------:|
+| Lightweight                    | [πŸ“š Fuzzy Matching](https://ontoaligner.readthedocs.io/tutorials/lightweight.html)                         | [πŸ“ Code](https://github.com/sciknoworg/OntoAligner/blob/main/examples/fuzzy_matching.py)      |
+| Retrieval                      | [πŸ“š Retrieval Aligner](https://ontoaligner.readthedocs.io/tutorials/retriever.html)                        | [πŸ“ Code](https://github.com/sciknoworg/OntoAligner/blob/main/examples/retriever_matching.py)  |
+| Large Language Models          | [πŸ“š Large Language Models Aligner](https://ontoaligner.readthedocs.io/tutorials/llm.html)                  | [πŸ“ Code](https://github.com/sciknoworg/OntoAligner/blob/main/examples/llm_matching.py)        |
+| Retrieval Augmented Generation | [πŸ“š Retrieval Augmented Generation](https://ontoaligner.readthedocs.io/tutorials/rag.html)                 | [πŸ“ Code](https://github.com/sciknoworg/OntoAligner/blob/main/examples/rag_matching.py)        |
+| FewShot                        | [πŸ“š FewShot RAG](https://ontoaligner.readthedocs.io/tutorials/rag.html#fewshot-rag)                        | [πŸ“ Code](https://github.com/sciknoworg/OntoAligner/blob/main/examples/rag_matching.py)        |
+| In-Context Vectors Learning    | [πŸ“š In-Context Vectors RAG](https://ontoaligner.readthedocs.io/tutorials/rag.html#in-context-vectors-rag)  | [πŸ“ Code](https://github.com/sciknoworg/OntoAligner/blob/main/examples/icv_rag_matching.py)    |
+
+## πŸš€ Quick Tour
 
 Below is an example of using Retrieval-Augmented Generation (RAG) for ontology matching:
 
@@ -71,16 +76,8 @@ encoder_model = ConceptParentRAGEncoder()
 encoded_ontology = encoder_model(source=dataset['source'], target=dataset['target'])
 
 # Step 4: Define configuration for retriever and LLM
-retriever_config = {
-    "device": 'cuda',
-    "top_k": 5,
-}
-llm_config = {
-    "device": "cuda",
-    "max_length": 300,
-    "max_new_tokens": 10,
-    "batch_size": 15,
-}
+retriever_config = {"device": 'cuda', "top_k": 5,}
+llm_config = {"device": "cuda", "max_length": 300, "max_new_tokens": 10, "batch_size": 15}
 
 # Step 5: Initialize Generate predictions using RAG-based ontology matcher
 model = MistralLLMBERTRetrieverRAG(retriever_config=retriever_config,
@@ -88,16 +85,13 @@ model = MistralLLMBERTRetrieverRAG(retriever_config=retriever_config,
 predicts = model.generate(input_data=encoded_ontology)
 
 # Step 6: Apply hybrid postprocessing
-hybrid_matchings, hybrid_configs = rag_hybrid_postprocessor(
-    predicts=predicts,
-    ir_score_threshold=0.1,
-    llm_confidence_th=0.8
-)
+hybrid_matchings, hybrid_configs = rag_hybrid_postprocessor(predicts=predicts,
+                                                            ir_score_threshold=0.1,
+                                                            llm_confidence_th=0.8)
 
 evaluation = metrics.evaluation_report(predicts=hybrid_matchings, references=dataset['reference'])
 
-print("Hybrid Matching Evaluation Report:", json.dumps(evaluation, indent=4))
-print("Hybrid Matching Obtained Configuration:", hybrid_configs)
+print("Hybrid Matching Evaluation Report:", evaluation)
 
 # Step 7: Convert matchings to XML format and save the XML representation
 xml_str = xmlify.xml_alignment_generator(matchings=hybrid_matchings)
@@ -106,18 +100,16 @@ with open("matchings.xml", "w", encoding="utf-8") as xml_file:
 ```
 
-## Contribution
+## ⭐ Contribution
 
 We welcome contributions to enhance OntoAligner and make it even better! Please review our contribution guidelines in [CONTRIBUTING.md](CONTRIBUTING.md) before getting started. Your support is greatly appreciated.
 
-
-
-## Contact
+[//]: # (## πŸ“§ Contact)
 
 If you encounter any issues or have questions, please submit them in the [GitHub issues tracker](https://github.com/sciknoworg/OntoAligner/issues).
 
-## Citation
+## πŸ’‘ Acknowledgements
 
 If you use OntoAligner in your work or research, please cite the following:
 
@@ -129,3 +121,12 @@ If you use OntoAligner in your work or research, please cite the following:
   year = {2024},
   url = {https://github.com/HamedBabaei/OntoAligner},
 }
+```
+
+This software is licensed under the MIT License.
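
For reference, the patched Quick Tour can be assembled into a single script roughly as follows. This is a minimal sketch, not the library's canonical example: the import paths, the `llm_config=llm_config` keyword, the `dataset` placeholder, and the final `xml_file.write(...)` line are assumptions for illustration; only the calls in Steps 3 to 7 appear verbatim in the hunks above.

```python
# Minimal sketch of the Quick Tour pipeline as it reads after this patch.
# ASSUMPTIONS: the import paths, the llm_config keyword argument, the dataset
# placeholder, and the final write() call are illustrative; only the calls in
# Steps 3-7 appear verbatim in the hunks above.
from ontoaligner.encoder import ConceptParentRAGEncoder        # assumed path
from ontoaligner.aligner import MistralLLMBERTRetrieverRAG     # assumed path
from ontoaligner.postprocess import rag_hybrid_postprocessor   # assumed path
from ontoaligner.utils import metrics, xmlify                  # assumed path

# Steps 1-2 (not shown in the diff): load a task/dataset dict that provides
# 'source', 'target', and 'reference' entries; see the tutorials table above.
dataset = ...  # placeholder

# Step 3: Encode the source and target ontologies for RAG
encoder_model = ConceptParentRAGEncoder()
encoded_ontology = encoder_model(source=dataset['source'], target=dataset['target'])

# Step 4: Define configuration for retriever and LLM
retriever_config = {"device": "cuda", "top_k": 5}
llm_config = {"device": "cuda", "max_length": 300, "max_new_tokens": 10, "batch_size": 15}

# Step 5: Initialize the RAG-based ontology matcher and generate predictions
model = MistralLLMBERTRetrieverRAG(retriever_config=retriever_config, llm_config=llm_config)
predicts = model.generate(input_data=encoded_ontology)

# Step 6: Apply hybrid postprocessing and evaluate against the references
hybrid_matchings, hybrid_configs = rag_hybrid_postprocessor(
    predicts=predicts, ir_score_threshold=0.1, llm_confidence_th=0.8
)
evaluation = metrics.evaluation_report(predicts=hybrid_matchings, references=dataset['reference'])
print("Hybrid Matching Evaluation Report:", evaluation)

# Step 7: Convert matchings to XML and save them
xml_str = xmlify.xml_alignment_generator(matchings=hybrid_matchings)
with open("matchings.xml", "w", encoding="utf-8") as xml_file:
    xml_file.write(xml_str)
```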
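
Because this section is a mailbox-format patch (`git format-patch` output), it can be applied to a local clone with standard git tooling. A minimal sketch, assuming the patch is saved as `0001-readme-improvement.patch` (the file name is illustrative):

```bash
# Apply the patch as a commit, keeping the author, date, and message
# from the mail header at the top of this patch.
git clone https://github.com/sciknoworg/OntoAligner.git
cd OntoAligner
git am 0001-readme-improvement.patch

# Or check and apply the changes without creating a commit:
git apply --check 0001-readme-improvement.patch
git apply 0001-readme-improvement.patch
```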