dyld_shared_cache processing / Single-Image loading for BinaryNinja

Overview

Dyld Shared Cache Parser

Author: cynder (kat)

Dyld Shared Cache Support for BinaryNinja

BinaryNinja Screenshot

It loads a single image from the cache without any of the fuss of manually loading several unrelated images or dealing with the awful off-image addresses, and it produces better output than IDA, Hopper, or any other disassembler on the market.

Installation + Usage

  1. Open the plugin manager
  2. Search for "Dyld" and install this plugin

Usage:

  1. Open a dyld_shared_cache file with BN
  2. Select the image you would like to disassemble
  3. Congrats, you are now reverse engineering the Mach-O
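
If you prefer scripting to clicking through the UI, the same view can also be opened headlessly. Below is a minimal sketch, assuming a headless-capable Binary Ninja license and this plugin installed so that its 'DyldSharedCache' view type (visible in the traceback further down) is registered; image selection is normally done through a UI prompt, so headless behaviour may differ.

    # Minimal headless sketch -- assumes a headless-capable Binary Ninja license
    # and that this plugin is installed (it registers the 'DyldSharedCache' view).
    import binaryninja

    # open_view() picks the best-matching registered BinaryView type for the
    # file and waits for initial analysis before returning the view.
    with binaryninja.open_view("/path/to/dyld_shared_cache_arm64e") as bv:
        print(bv.view_type)                 # expected: 'DyldSharedCache'
        print(len(list(bv.functions)))      # functions recovered from the chosen image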

Description:

This project acts as an interface to two separate projects: DyldExtractor and ktool (mainly DyldExtractor).

DyldExtractor is a project written primarily by 'arandomdev', designed for standalone CLI dyld_shared_cache extraction. It is the best tool for the job, and it reverses the majority of the "optimizations" that make DSC reverse engineering ugly and painful. With this plugin, Binja's processing should outperform IDA's, and it won't require IDA's routine of repeatedly right-clicking and manually loading tons of modules.

This version of DyldExtractor carries a lot of modifications (read: a lot of commented-out lines) from the original, made so it functions better in the Binja environment.

ktool is a multifaceted project I wrote, primarily for Mach-O and Objective-C parsing.

Here it is used mainly for very basic parsing of DyldExtractor's output, as we need to properly write the segments into the VM (and scrap all the DSC data that was originally in the file) so the Mach-O view knows how to parse it.
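
As a rough illustration of that last step (this is not the plugin's actual code), the sketch below parses an already-extracted Mach-O with ktool and maps its segments into a BinaryView at their virtual addresses. The ktool names used here (load_image, segments, vm_address, file_address, size), the placeholder path, and the segment flags are assumptions and may differ between versions.

    # Rough sketch only, under the assumptions stated above.
    import binaryninja
    import ktool
    from binaryninja import SegmentFlag

    EXTRACTED = "/tmp/extracted_image"  # hypothetical path to an already-extracted Mach-O

    # Back a fresh BinaryView with the raw bytes of the extracted image.
    raw_bytes = open(EXTRACTED, "rb").read()
    bv = binaryninja.BinaryView.new(data=raw_bytes)

    with open(EXTRACTED, "rb") as fp:
        image = ktool.load_image(fp)  # parse the Mach-O load commands with ktool
        # Map each segment at its virtual address so later analysis (or a
        # Mach-O-aware view) sees the same layout the cache image expects.
        for seg in image.segments.values():
            bv.add_auto_segment(
                seg.vm_address, seg.size,    # where the segment lives in the VM
                seg.file_address, seg.size,  # where its bytes live in the file
                SegmentFlag.SegmentReadable | SegmentFlag.SegmentExecutable)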

License

This plugin, along with ktool and DyldExtractor, is released under an MIT license. Both of those projects are vendored within this one to make installation slightly simpler.


Comments
  • TypeError: cannot unpack non-iterable NoneType object

    Tried this just now, and got this, trying to extract the macOS 13.1 x86_64h cache:

    Successfully installed: Dyld Shared Cache Processor
    Loaded python3 plugin 'cxnder_bndyldsharedcache'
    Traceback (most recent call last):
      File "/Applications/Binary Ninja.app/Contents/MacOS/plugins/../../Resources/python/binaryninja/binaryview.py", line 2818, in _init
        return self.init()
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/dsc.py", line 101, in init
        stub_fixer.fixStubs(extraction_ctx)
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 1681, in fixStubs
        _StubFixer(extractionCtx).run()
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 1011, in run
        self._symbolizer = _Symbolizer(self._extractionCtx)
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 59, in __init__
        self._enumerateExports()
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 101, in _enumerateExports
        if depInfo := self._getDepInfo(dylib, self._machoCtx):
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 179, in _getDepInfo
        imageOff, dyldCtx = self._dyldCtx.convertAddr(imageAddr)
    TypeError: cannot unpack non-iterable NoneType object
    BinaryView of type 'DyldSharedCache' failed to initialize!
    No available/valid debug info parsers for `Raw` view
    Found more than 'analysis.limits.stringSearch' (0x100000) strings aborting search for range: 0 - 0x33be0000
    Analysis update took 12.239 seconds
    
    
    opened by torarnv 1
  • prep for plugin manager

    Looks like only two changes are required to get this added to the BN plugin manager. The first is to add a requirements.txt -- while ktool and DyldExtractor are versioned, capstone is still a requirement of DyldExtractor so it would be nice to expose that.

    Or, better yet, replace the disassembler with BN's own disassembly to remove the dependency entirely. That also means there's no need to hack around the lack of PAC instructions as BN can disassemble those just fine.

    The other step is to make a release, then we can add the plugin directly to the plugin manager which would be really handy!

    opened by psifertex 1
  • fix relative imports for built-in BN Py 3.8.9 on MacOS

    I'm not sure whether it's the exact python version or the fact that I'm using the BN shipped Python versus homebrew / ports but I'm unable to use the plugin as-is on MacOS without this change. I don't know how much this versioned DyldExtractor has differed, happy to test/submit upstream in the parent repo if you prefer.

    opened by psifertex 0