Dictionary - Application focused on word search through web scraping

Overview

Dictionary

Discord: GeimerDroiid | Spotify: GeimerDroiid | GitHub: GeimerDroiid | Email: jmanuelhv9@gmail.com

About

An application focused on looking up the meaning of words through web scraping, with additional functions such as Dictation, Spelling and Syllables.
I created this application as a way to test the knowledge I had started to acquire, so I decided to build this dictionary with a few basic functions, such as spelling. From there more ideas came up: a method that would tell me the meaning of words I didn't understand, and a way to enter a word just by saying it to the computer instead of typing it. When I created this application I was just starting to learn Python (the language it is written in), so the code may contain many bad practices, which I am correcting for future versions. While building it I learned how to create user interfaces, dabbled in web scraping, investigated how to convert text to speech and play it, and in the end used object-oriented programming to make building the interface easier.

Dictionary | GUI (application screenshots)

What's new in v1.5

  • Interface improvements

    A cleaner interface with buttons and colors that contrast better with each other, improved typography, and more minimalist animations for a better user experience.

  • Bugs fixed

    Corrections of grammar-related errors, most notably the removal of "Gua" and "Guo", since those letter combinations do not belong to Spanish grammar. The application startup time has also been improved.

  • Code improvement

    The application has been almost completely rebuilt, so all of the code is new. To preserve readability, each function has been split into its own file, and I looked for the most efficient and simplest way to implement each one. (All of the code is in English.)

  • The dictation function has been disabled

    I have decided to disable the dictation feature in the final version because it caused many problems when packaging the application. It will remain disabled until I find a way to build this feature with as few bugs as possible and reliable behavior.


Functions

  • Dictation

    The dictation function listens to your voice and converts it into text that is entered into the application's search bar, so that you can then apply any other function to that text. It uses the SpeechRecognition library, which lets the application capture audio from the computer's microphone and convert it to text (see the sketch after this list). All of the code is in the file spelling.py.

  • Spelling

    The spelling function breaks a sentence into words and spells each one letter by letter; when it reaches the end of a word, it pronounces the whole word (see the sketch after this list).

  • Syllables

    The Syllables function provides a menu containing the letter combinations and syllables together with their respective sounds.

  • Meaning

    This function uses web scraping to look up a word in the DEM dictionary and reads back its meaning together with examples; if the word is not found, it suggests search alternatives. It relies on the BeautifulSoup4 library for the scraping and on pyttsx3 to convert text to speech (see the sketch after this list).
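
A minimal sketch of how the dictation step could be wired with the SpeechRecognition library mentioned above; the Google Web Speech backend and the "es-MX" language code are assumptions for illustration, not necessarily what the project uses:

    # Dictation sketch: capture microphone audio and turn it into text.
    # The recognizer backend and language code are assumptions, not the project's exact setup.
    import speech_recognition as sr

    def listen_for_text() -> str:
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            # Compensate for background noise before listening.
            recognizer.adjust_for_ambient_noise(source, duration=0.5)
            audio = recognizer.listen(source)
        try:
            # Spanish (Mexico) is assumed as the target language.
            return recognizer.recognize_google(audio, language="es-MX")
        except sr.UnknownValueError:
            return ""  # Nothing intelligible was heard.

The returned string would then be placed in the search bar so the other functions can operate on it.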

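A rough sketch of the spelling behavior described above, using pyttsx3 for the speech output; the function name and exact flow are illustrative, not the project's actual code:

    # Spelling sketch: spell each word letter by letter, then say the whole word.
    # The function name is illustrative, not the project's actual API.
    import pyttsx3

    def spell_sentence(sentence: str) -> None:
        engine = pyttsx3.init()
        for word in sentence.split():
            for letter in word:
                engine.say(letter)   # spell the word letter by letter...
            engine.say(word)         # ...then pronounce it in full
        engine.runAndWait()          # play the queued utterances

    spell_sentence("hola mundo")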

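And a sketch of the Meaning lookup with requests and BeautifulSoup4; the DEM query URL and the CSS class used below are assumptions made for illustration, since the real pages may use different routes and markup:

    # Meaning sketch: scrape a definition from the DEM dictionary.
    # The URL pattern and the "definicion" class are assumptions.
    import requests
    from bs4 import BeautifulSoup

    def look_up(word: str) -> str:
        url = f"https://dem.colmex.mx/Ver/{word}"          # hypothetical route
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        entry = soup.find("div", class_="definicion")      # hypothetical selector
        return entry.get_text(strip=True) if entry else "No definition found"

    print(look_up("casa"))

The result could then be handed to pyttsx3 to be read aloud, as the Meaning function does.
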
Requirements

  • It is important not to remove the executable file from its folder, as this will cause errors. The best option is to create a shortcut and move the shortcut to the desktop or anywhere else you want to place it.

  • For the application to perform well, I recommend downloading "Microsoft Sabina Desktop - Spanish (Mexico)", a voice provided by Microsoft for Windows devices.

How to download "Microsoft Sabina Desktop - Spanish (Mexico)".

To download the voice the program needs, go to:

Settings > Time and language > Speech > Manage voices > Add voices

In the search bar, type "Spanish" and download the entry that says "Spanish (Mexico)". With that, everything is ready to use the application correctly and avoid pronunciation errors.
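
Since the Meaning function speaks through pyttsx3, the installed Sabina voice can be selected explicitly once it has been downloaded. The snippet below is only a sketch of how that selection might look; how the application actually chooses its voice is an assumption here:

    # Sketch: select the "Sabina" Spanish (Mexico) voice from the system voices.
    # The application's real voice-selection logic may differ.
    import pyttsx3

    engine = pyttsx3.init()
    for voice in engine.getProperty("voices"):
        if "Sabina" in voice.name:
            engine.setProperty("voice", voice.id)
            break

    engine.say("Hola, esta es la voz de Sabina.")
    engine.runAndWait()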

If you wish to contribute to the development of the application:

  • First clone the repository

      git clone https://github.com/GeimerDroiid/Dictionary.git
    
  • Then create a branch named after your username

      git checkout -b <your-username>

  • And finally, install the requirements

      py -m pip install -r requirements.txt
    

Contribution

Pull requests are welcome; I would appreciate your support in helping this application develop further. For major changes, please open an issue first to discuss what you would like to change.
Releases (v1.5)
  • v1.5 (Jan 3, 2022)

    See "What's new in v1.5" above.

    Full Changelog: https://github.com/DawntDev/Dictionary/compare/v1.0...v1.5

    Source code(tar.gz)
    Source code(zip)
    Dictionary.1.5.zip(75.35 MB)
  • v1.0 (Jan 3, 2022)

    See the "About" section above.

    Full Changelog: https://github.com/DawntDev/Dictionary/commits/v1.0

    Source code(tar.gz)
    Source code(zip)
    dictionary.exe(50.35 MB)
Owner
Juan Manuel