Build a multi-source (WeChat Official Accounts, RSS), clean, personalized reading environment

Overview

2C

Build a multi-source (WeChat Official Accounts, RSS), clean, personalized reading environment

As a heavy user of WeChat Official Accounts, I have always treated them as a place to absorb knowledge. As usage grows, most readers sooner or later run into the same headache: ads.

Suppose you follow a dozen or so accounts. If each one runs an ad every two weeks, in theory you face twenty-odd ads a month; in practice it is more, and on an unlucky day your whole feed can be ads. Follow twenty or thirty accounts and the current ad bombardment is all but unavoidable.

Worse still, most of these ads peddle anxiety and spread negativity, which I find intolerable and genuinely bad for my mood. Yet some of these accounts do publish excellent articles. So how do you read the articles without the ads? If you have felt this same helplessness about ads while reading Official Accounts, this project is what you need.

That is why this project exists: to build a multi-source (WeChat Official Accounts, RSS), clean, personalized reading environment.

PS: To be clear, viewing ads is a form of support that, to some degree, helps authors keep producing good work. But when I like an article I tip the author directly, so the free-rider accusation does not apply to me, thank you.

Implementation

My approach is simple; the rough flow is as follows:

2c_process

In brief:

  • Collector: monitors the Official Accounts or blog sources you follow and builds a feed stream as the input source;
  • Classifier (ads): uses machine learning over historical ad data to build an ad classifier (custom rules are also supported), automatically tags each article, and persists it to MongoDB;
  • Distributor: handles data requests and responses at the API layer, lets each user define a personalized configuration, and distributes the clean article stream accordingly to WeChat, DingTalk, Telegram, or even a self-hosted site.

This yields a clean reading environment. Going a step further, you can build a personal knowledge base on top of it, with features such as tag management and knowledge-graph construction, all implemented at the API layer.
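As a sketch of how such a tag-then-persist ad classifier can work, the snippet below marks a title as an ad when its cosine similarity to any known ad sample crosses a threshold. This is illustrative only: the functions, the bigram features, and the sample titles are assumptions, not the project's actual model.

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two titles over character-bigram counts."""
    va = Counter(a[i:i + 2] for i in range(len(a) - 1))
    vb = Counter(b[i:i + 2] for i in range(len(b) - 1))
    dot = sum(va[k] * vb[k] for k in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def ad_marker(title: str, known_ads: list, cos_value: float = 0.6) -> int:
    """Tag a title as ad (1) when it is close enough to any known ad sample."""
    return int(any(cosine(title, ad) >= cos_value for ad in known_ads))

# Hypothetical ad samples, standing in for the historical ad dataset
known_ads = ["限时特惠最后一天", "这门课程今天免费领取"]

print(ad_marker("限时特惠最后2天", known_ads))    # -> 1, close to a known ad
print(ad_marker("Python 异步爬虫实践", known_ads))  # -> 0, no overlap with ads
```

In the real pipeline the resulting 0/1 tag would be stored alongside the article document in MongoDB, so the distributor can filter on it.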

For implementation details, see the article [打造一个干净且个性化的公众号阅读环境].

Usage

This project is managed with pipenv. Install and run as follows:

# Make sure you have Python 3.6+
git clone https://github.com/howie6879/2c.git
cd 2c

# Create the base environment
pipenv install --python={your_python3.6+_path}  --skip-lock --dev
# Configure .env; see doc/00.环境变量.md for details
# Start
pipenv run dev

It is recommended to read the docs before use:

Help

To improve the model's recognition accuracy, I hope everyone can contribute some ad samples. See the sample file .files/datasets/ads.csv; the format is:

title              url
ad article title   ad article link

An example:

ads_demo

The same ad is usually placed across multiple accounts, so please check whether the record already exists before adding it. I really hope everyone can pitch in. Come on, send a PR!

Acknowledgements

Many thanks to the following projects:

Thanks to the following contributors (in no particular order):

About

Feel free to reach out (follow to join the group):

img
Comments
  • Installed with the docker one-click setup; running reports ERROR Liuli 执行失败!'doc_source'

    The run log is below. What is causing this?

    [2022:02:18 10:51:54] INFO  Liuli Schedule(v0.2.1) task([email protected]_team) started successfully :)
    [2022:02:18 10:51:54] INFO  Liuli Task([email protected]_team) schedule time:
     00:10
     12:10
     21:10
    [2022:02:18 10:51:54] ERROR Liuli 执行失败!'doc_source'
    opened by GuoZhaoHui628 24
  • Collecting Official Accounts whose names contain a space always fails

    [2022:05:27 08:11:47] INFO Request <GET: https://weixin.sogou.com/weixin?type=1&query=丁爸20%情报分析师的工具箱&ie=utf8&s_from=input&sug=n&sug_type=>
    [2022:05:27 08:11:48] ERROR SGWechatSpider <Item: Failed to get target_item's value from html.>
    Traceback (most recent call last):
      File "/root/.local/share/virtualenvs/code-nY5aaahP/lib/python3.9/site-packages/ruia/spider.py", line 197, in _process_async_callback
        async for callback_result in callback_results:
      File "/data/code/src/collector/wechat/sg_ruia_start.py", line 58, in parse
        async for item in SGWechatItem.get_items(html=html):
      File "/root/.local/share/virtualenvs/code-nY5aaahP/lib/python3.9/site-packages/ruia/item.py", line 127, in get_items
        raise ValueError(value_error_info)
    ValueError: <Item: Failed to get target_item's value from html.>
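    The request URL in this log encodes the space in the account name as a literal "20%" rather than "%20", so Sogou cannot match the query. For illustration only (this is not the spider's actual code), Python's standard urllib.parse produces a correctly encoded query string:

```python
from urllib.parse import urlencode, quote

# Account name containing a space, as in the failing request above
name = "丁爸 情报分析师的工具箱"

params = {
    "type": 1,
    "query": name,
    "ie": "utf8",
    "s_from": "input",
    "sug": "n",
    "sug_type": "",
}
# quote_via=quote percent-encodes the space as %20 (default would use '+')
url = "https://weixin.sogou.com/weixin?" + urlencode(params, quote_via=quote)
print(url)
```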

    bug 
    opened by hackdoors 7
  • liuli_schedule exited with code 0

    I installed following the guide at https://mp.weixin.qq.com/s/rxoq97YodwtAdTqKntuwMA.

    My actual files and code are as follows:

    The contents of pro.env:

    PYTHONPATH=${PYTHONPATH}:${PWD}
    LL_M_USER="liuli"
    LL_M_PASS="liuli"
    LL_M_HOST="liuli_mongodb"
    LL_M_PORT="27017"
    LL_M_DB="admin"
    LL_M_OP_DB="liuli"
    LL_FLASK_DEBUG=0
    LL_HOST="0.0.0.0"
    LL_HTTP_PORT=8765
    LL_WORKERS=1
    # The settings above can stay as-is; only the ones below need your own values
    # Fill in your actual IP
    LL_DOMAIN="http://172.17.0.1:8765"
    # Fill in your WeCom distribution settings
    LL_WECOM_ID="自定义"
    LL_WECOM_AGENT_ID="自定义"
    LL_WECOM_SECRET="自定义"
    

    The contents of default.json:

    {
        "name": "default",
        "author": "liuli_team",
        "collector": {
            "wechat_sougou": {
                "wechat_list": [
                    "老胡的储物柜"
                ],
                "delta_time": 5,
                "spider_type": "playwright"
            }
        },
        "processor": {
            "before_collect": [],
            "after_collect": [{
                "func": "ad_marker",
                "cos_value": 0.6
            }, {
                "func": "to_rss",
                "link_source": "github"
            }]
        },
        "sender": {
            "sender_list": ["wecom"],
            "query_days": 7,
            "delta_time": 3
        },
        "backup": {
            "backup_list": ["mongodb"],
            "query_days": 7,
            "delta_time": 3,
            "init_config": {},
            "after_get_content": [{
                "func": "str_replace",
                "before_str": "data-src=\"",
                "after_str": "src=\"https://images.weserv.nl/?url="
            }]
        },
        "schedule": {
            "period_list": [
                "00:10",
                "12:10",
                "21:10"
            ]
        }
    }
    

    The contents of docker-compose.yml:

    version: "3"
    services:
      liuli_api:
        image: liuliio/api:v0.1.3
        restart: always
        container_name: liuli_api
        ports:
          - "8765:8765"
        volumes:
          - ./pro.env:/data/code/pro.env
        depends_on:
          - liuli_mongodb
        networks:
          - liuli-network
      liuli_schedule:
        image: liuliio/schedule:v0.2.4
        restart: always
        container_name: liuli_schedule
        volumes:
          - ./pro.env:/data/code/pro.env
          - ./liuli_config:/data/code/liuli_config
        depends_on:
          - liuli_mongodb
        networks:
          - liuli-network
      liuli_mongodb:
        image: mongo:3.6
        restart: always
        container_name: liuli_mongodb
        environment:
          - MONGO_INITDB_ROOT_USERNAME=liuli
          - MONGO_INITDB_ROOT_PASSWORD=liuli
        ports:
          - "27027:27017"
        volumes:
          - ./mongodb_data:/data/db
        command: mongod
        networks:
          - liuli-network
    
    networks:
      liuli-network:
        driver: bridge
    

    The error output is as follows:

    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule  | Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    liuli_schedule  | Loading .env environment variables...
    liuli_schedule exited with code 0
    

    I suspect it is a Python path problem. My Python path is:

    which python3 # /usr/bin/python3
    

    My VPS does not define the ${PYTHONPATH} variable:

    echo ${PYTHONPATH} # NULL
    

    How should I fix this?

    opened by huangwb8 7
  • The Liuli project needs a logo

    Origin of the project name, contributed by group member @Sngxpro:

    Codename: 琉璃 (Liuli)
    
    English: RuriElysion
     or: RuriWorld
    
    Slogan: 琉璃开净界,薜荔启禅关 (Mei Yaochen, 《缑山子晋祠 会善寺》)
    
    Meaning: to build a pure land like the Eastern Lapis Lazuli Pure World. The Medicine Buddha Sutra (《药师经》) says: 「然彼佛土,一向清净,无有女人,亦无恶趣,及苦音声。」 ("That buddha-land is wholly pure, free of evil destinies and sounds of suffering.")
    
    help wanted 
    opened by howie6879 7
  • Please include the original article link in the RSS feed

    image

    For now I plan to write a script that fetches the full text through a full-text extraction API and mails it to my Gmail in a custom format... That way, besides newsletters, RSS subscriptions and WeChat Official Accounts can all be read directly in Spark...

    However, the paid full-text API I found is demanding: the link format in the RSS does not work, and even after running it through decodeURIComponent the format is still wrong.

    If the RSS feed carried the original page URL, the full text could be fetched from the original link without errors!

    I hope the author can support this. Thanks :)

    opened by CenBoMin 5
  • Feature request: stop changing the updated field in the generated RSS

    Below is an excerpt of the generated RSS. The updated date here is the time at which liuli's periodic run last refreshed the entry; even for a very old item, updated gets bumped to the current time.

    <entry>
        <id>liuli_wechat - 谷歌开发者 - 社区说|TensorFlow 在工业视觉中的落地</id>
        <title>社区说|TensorFlow 在工业视觉中的落地 </title>
        <updated>2022-05-28T13:17:35.903720+00:00</updated>
        <author>
            <name>liuli_wechat - GDG</name>
        </author>
        <content/>
        <link href="https://ddns.ysmox.com:8766/backup/liuli_wechat/谷歌开发者/%E7%A4%BE%E5%8C%BA%E8%AF%B4%EF%BD%9CTensorFlow%20%E5%9C%A8%E5%B7%A5%E4%B8%9A%E8%A7%86%E8%A7%89%E4%B8%AD%E7%9A%84%E8%90%BD%E5%9C%B0" rel="alternate"/>
        <published>2022-05-25T17:30:46+08:00</published>
    </entry>
    

    This causes problems. Some RSS readers (such as Tiny Tiny RSS) sort the timeline by updated rather than published, so there is no way to tell which entries are newly generated and which are old.

    So I hope updated can be kept stable (for example, record the current time when the entry is first stored in MongoDB and leave it unchanged on later periodic runs) or made to track published.

    Finally, I hope I have stated my problem and request clearly. Thanks!

    enhancement 
    opened by YsMox 3
  • The WeChat Official Account crawling demo fails

    Following https://mp.weixin.qq.com/s/rxoq97YodwtAdTqKntuwMA, I started the demo to crawl Official Account content, but the log shows the run failed.

    Loading .env environment variables...
    [2022:05:09 10:55:45] INFO  Liuli Schedule(v0.2.4) task([email protected]_team) started successfully :)
    [2022:05:09 10:55:45] INFO  Liuli Task([email protected]_team) schedule time:
     00:10
     12:10
     21:10
    [2022:05:09 10:55:45] ERROR Liuli 执行失败!'doc_source'
    

    The liuli schedule image version used in the article's docker compose file does not include playwright, while the default json in the article says to crawl WeChat content with playwright. I switched to the playwright-enabled image, but it still fails.

    opened by Colin-XKL 3
  • Timestamp format cleaning fails when crawling Official Account articles

    The test script:

    from src.collector.wechat_feddd.start import WeiXinSpider
    WeiXinSpider.request_config = {"RETRIES": 3, "DELAY": 5, "TIMEOUT": 20}
    WeiXinSpider.start_urls = ['https://mp.weixin.qq.com/s/OrCRVCZ8cGOLRf5p5avHOg']
    WeiXinSpider.start()
    

    Cause: the cleaning step expects timestamps like 2022-03-21 20:59, but the value actually scraped is 2022-03-22 20:37:12, so the clean_doc_ts function raises. See the screenshot below. image
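    A tolerant cleaning step can try each known layout before giving up. The sketch below is illustrative: clean_ts and WECHAT_TS_FORMATS are hypothetical stand-ins, not the repo's clean_doc_ts.

```python
from datetime import datetime

# Both layouts observed in the wild: minute-precision and second-precision
WECHAT_TS_FORMATS = ("%Y-%m-%d %H:%M", "%Y-%m-%d %H:%M:%S")

def clean_ts(raw: str) -> float:
    """Parse a scraped timestamp, trying each known format in turn."""
    for fmt in WECHAT_TS_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).timestamp()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized timestamp: {raw!r}")

clean_ts("2022-03-21 20:59")     # minute precision parses fine
clean_ts("2022-03-22 20:37:12")  # second precision no longer raises
```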

    opened by showthesunli 3
  • Fetch the WeCom distribution department ID parameter dynamically

    Two new config options:

    # WeCom distribution users (user accounts, case-insensitive); separate multiple users with ;
    CC_WECOM_TO_USER=""
    # WeCom distribution departments (department names); separate multiple departments with ;
    CC_WECOM_PARTY=""
    

    If both are left empty, messages are distributed by default to every user in every department of the current app; if filled in, distribution follows the configured values.
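    A sketch of how such ";"-separated options might be parsed (parse_multi is illustrative, not the project's code). Note that the WeCom message API itself expects IDs joined with "|" and accepts "@all" for everyone:

```python
import os

def parse_multi(name: str) -> list:
    """Split a ';'-separated env value into a clean list; unset/empty -> []."""
    raw = os.environ.get(name, "")
    return [part.strip() for part in raw.split(";") if part.strip()]

os.environ["CC_WECOM_TO_USER"] = "alice;Bob ; carol"   # example value

to_user = parse_multi("CC_WECOM_TO_USER")
party = parse_multi("CC_WECOM_PARTY")                  # unset -> []

# Fall back to everyone when nothing is configured ('@all' per the WeCom API)
touser_field = "|".join(to_user) or "@all"
```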

    opened by zyd16888 1
  • v0.2.4: cannot start schedule by following the tutorial

    If I create the pro.env file by hand as the tutorial describes, docker will not start; if I skip that step, starting docker auto-creates a pro.env directory instead, and docker then loops over the following output:

    Loading .env environment variables...
    Start schedule(pro) serve: PIPENV_DOTENV_LOCATION=./pro.env pipenv run python src/liuli_schedule.py
    Warning: file PIPENV_DOTENV_LOCATION=./pro.env does not exist!! Not loading environment variables.
    Process Process-1:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
        self.run()
      File "/usr/local/lib/python3.9/multiprocessing/process.py", line 108, in run
        self._target(*self._args, **self._kwargs)
      File "/data/code/src/liuli_schedule.py", line 84, in run_liuli_schedule
        ll_config = json.load(load_f)
      File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
        return loads(fp.read(),
      File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
        return _default_decoder.decode(s)
      File "/usr/local/lib/python3.9/json/decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/local/lib/python3.9/json/decoder.py", line 355, in raw_decode
        raise JSONDecodeError("Expecting value", s, err.value) from None

    opened by zhyueyueniao 1
  • Distributor support

    Output terminals currently planned:

    • [x] DingTalk: fairly open and easy to integrate, recommended @howie6879
    • [x] WeChat: WeCom or a hook are options @howie6879
    • [x] RSS generator module @howie6879
    • [x] TG @123seven
    • [x] Bark @LeslieLeung
    • [ ] Feishu

    Requests for additional distribution targets are welcome in the comments.

    enhancement help wanted 
    opened by howie6879 14
Releases(v0.2.0)
  • v0.2.0(Feb 10, 2022)

    v0.2.0 2022-02-11

    liuli v0.2.0 👏 released. See the kanban plan here; the related features and improvements are described below.

    Improvements:

    • Partial code refactor; project renamed to liuli
    • Faster deployment via docker-compose support #17
    • Project size cut from 100 MB to 3 MB (model removed)

    Fixes:

    • Distributor: dynamic WeCom distribution department ID parameter #16 @zyd16888
    • Fixed connection failures when the password contains special characters #35 @gclm

    Features:

    Source code(tar.gz)
    Source code(zip)
Owner
howie.hu
奇文共欣赏,疑义相与析 (Fine writings we admire together; doubtful meanings we unravel together.)