Hippolyzer

Intercepting proxy + analysis toolkit for Second Life-compatible virtual worlds

Overview

Hippolyzer is a revival of Linden Lab's PyOGP library targeting modern Python 3, with a focus on debugging issues in Second Life-compatible servers and clients. There is a secondary focus on mocking up new features without requiring a modified server or client.

Wherever reasonable, readability and testability are prioritized over performance.

Almost all code from PyOGP has been either rewritten or replaced. Major changes from upstream include making sure messages always correctly round-trip, and the addition of a debugging proxy similar to ye olde WinGridProxy.

It supports hot-reloaded addon scripts that can rewrite, inject or drop messages. Also included are tools for working with SL-specific assets like the internal animation and mesh formats.

It's quick and easy to bash together a script that does something useful if you're familiar with low-level SL details. See the Local Animation addon example.

Screenshot of proxy GUI

Setup

From Source

  • Python 3.8 or above is required. If you're unable to upgrade your system Python package due to being on a stable distro, you can use pyenv to create a self-contained Python install with the appropriate version.
  • Create a clean Python 3 virtualenv with python -m venv <venv_dir>
  • Activate the virtualenv by running the appropriate activation script
    • Under Linux this would be something like source <venv_dir>/bin/activate
    • Under Windows it's <venv_dir>\Scripts\activate.bat
  • Run pip install hippolyzer, or run pip install -e . in a cloned repo to install an editable version

Binary Windows Builds

Binary Windows builds are available on the Releases page. I don't extensively test these, so building from source is recommended.

Proxy

A proxy is provided with both a CLI and Qt-based interface. The proxy application wraps a custom SOCKS 5 UDP proxy, as well as an HTTP proxy based on mitmproxy.

Multiple clients are supported at a time, and UDP messages may be injected in either direction. The proxy UI was inspired by the Message Log and Message Builder as present in the Alchemy viewer.

Proxy Setup

  • Run the proxy with hippolyzer-gui
    • Addons can be loaded through the File -> Manage Addons menu or on the command-line like hippolyzer-gui addon_examples/bezoscape.py
    • If you want the command-line version, run hippolyzer-cli
  • Install the proxy's HTTPS certificate by going to File -> Install HTTPS Certs
    • You can also install it with hippolyzer-cli --setup-ca <path to your viewer's certificate directory>. On Linux that would be ~/.firestorm_x64/ if you're using Firestorm.
    • Certificate validation can be disabled entirely through the viewer debug setting NoVerifySSLCert, but this is not recommended.

Windows

Windows viewers have broken SOCKS 5 proxy support. To work around that, you need to use a wrapper EXE that makes the viewer correctly talk to Hippolyzer. Follow the instructions at https://github.com/SaladDais/WinHippoAutoProxy to start the viewer and run it through Hippolyzer.

The proxy should not be configured through the viewer's own preferences panel; it won't work correctly.

OS X & Linux

SOCKS 5 works correctly on these platforms, so you can just configure it through the preferences -> network -> proxy settings panel:

  • Start the viewer and configure it to use 127.0.0.1:9061 as a SOCKS proxy and 127.0.0.1:9062 as an HTTP proxy. You must select the option in the viewer to use the HTTP proxy for all HTTP traffic, or logins will fail.
  • Optionally, if you want to reduce HTTP proxy lag you can have asset requests bypass the HTTP proxy by setting the no_proxy env var appropriately. For example, no_proxy="asset-cdn.glb.agni.lindenlab.com" ./firestorm.
  • Log in!

Filtering

By default, the proxy's display filter is configured to ignore many high-frequency messages. The filter field allows filtering on the presence of specific blocks or the values of variables.

For example, to find either chat messages mentioning "foo" or any message referencing 125214 in an ID field you could use ChatFrom*.ChatData.Message~="foo" || *.*.*ID==125214. To find all ObjectUpdates related to object ID 125214 you could do *ObjectUpdate*.ObjectData.*ID==125214 || *ObjectUpdate*.ObjectData.Data.*ID==125214 to parse through both templated fields and fields inside the binary Data fields for compressed and terse object updates.

Messages also have metadata attached that can be matched on. To match on all kinds of ObjectUpdates that were related to the most recently selected object at the time the update was logged, you could do a filter like Meta.ObjectUpdateIDs ~= Meta.SelectedLocal

Similarly, if you have multiple active sessions and are only interested in messages related to a specific agent's session, you can do (Meta.AgentID == None || Meta.AgentID == "d929385f-41e3-4a34-a04e-f1fc39f24f12") && ....

Vectors can also be compared. This will get any ObjectUpdate variant that occurs within a certain range: (*ObjectUpdate*.ObjectData.*Data.Position > (110, 50, 100) && *ObjectUpdate*.ObjectData.*Data.Position < (115, 55, 105))
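The range filter above only makes sense if vector comparisons are evaluated per component (a box test). That's an assumption about the filter semantics, but it's the only reading under which the example works; a minimal Python sketch:

```python
def vec_between(pos, low, high):
    """Component-wise box test: low < pos < high must hold on every axis."""
    return all(lo < p < hi for p, lo, hi in zip(pos, low, high))

# Mirrors the filter's range check on Position
print(vec_between((112.5, 52.0, 103.0), (110, 50, 100), (115, 55, 105)))  # True
print(vec_between((112.5, 60.0, 103.0), (110, 50, 100), (115, 55, 105)))  # False
```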

If you want to compare against an enum or a flag class defined in templates.py, you can just specify its name: ViewerEffect.Effect.Type == ViewerEffectType.EFFECT_BEAM

Logging

Decoded messages are displayed in the log pane; clicking one will show the request and response for HTTP messages, and a human-friendly form for UDP messages. Some messages and fields have special packers defined that will give a more human-readable form of enum or binary fields, with the original form beside or below it.

For example, an AgentUpdate message may show up in the log pane like:

OUT AgentUpdate
# 15136: 

[AgentData]
  AgentID = [[AGENT_ID]]
  SessionID = [[SESSION_ID]]
  BodyRotation = (0.0, 0.0, 0.06852579861879349, 0.9976493446715918)
  HeadRotation = (-0.0, -0.0, 0.05799926817417145, 0.998316625570896)
  # Many flag fields are unpacked as tuples with the original value next to them
  State =| ('EDITING',) #16
  CameraCenter = <120.69703674316406, 99.8336181640625, 59.547847747802734>
  CameraAtAxis = <0.9625586271286011, 0.11959066987037659, -0.243267223238945>
  CameraLeftAxis = <-0.12329451739788055, 0.992370069026947, 0.0>
  CameraUpAxis = <0.24141110479831696, 0.029993515461683273, 0.9699592590332031>
  Far = 88.0
  ControlFlags =| ('YAW_POS', 'NUDGE_AT_POS') #524544
  Flags =| ('HIDE_TITLE',) #1


and an ObjectImage for setting a prim's texture may look like

OUT ObjectImage
# 3849: 

[AgentData]
  AgentID = [[AGENT_ID]]
  SessionID = [[SESSION_ID]]
[ObjectData]
  ObjectLocalID = 700966
  MediaURL = b''
  TextureEntry =| {'Textures': {None: '89556747-24cb-43ed-920b-47caed15465f'}, \
     'Color': {None: b'\xff\xff\xff\xff'}, \
     'ScalesS': {None: 1.0}, \
     'ScalesT': {None: 1.0}, \
     'OffsetsS': {None: 0}, \
     'OffsetsT': {None: 0}, \
     'Rotation': {None: 0}, \
     'BasicMaterials': {None: {'Bump': 0, 'FullBright': False, 'Shiny': 'MEDIUM'}}, \
     'MediaFlags': {None: {'WebPage': False, 'TexGen': 'DEFAULT', '_Unused': 0}}, \
     'Glow': {None: 0}, \
     'Materials': {None: '00000000-0000-0000-0000-000000000000'}}
  #TextureEntry = b'\x89UgG$\xcbC\xed\x92\x0bG\xca\xed\x15F_\x00\x00\x00\x00\x00\x00\x00\x00\x80?\x00\x00\x00\x80?\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'


All unpackers also provide equivalent packers that work with the message builder. The scripting interface uses the same packers as the logging interface, but uses a different representation. Clicking the "Copy repr()" button will give you a version of the message that you can paste into an addon's script.

Building Messages

The proxy GUI includes a message builder similar to Alchemy's to allow building arbitrary messages, or resending messages from the message log window. Both UDP and Caps messages may be sent.

For example, here's a message that will drop a physical cube on your head:

OUT ObjectAdd

[AgentData]
  # [[]] in a field value indicates a simple replacement
  # provided by the proxy
  AgentID = [[AGENT_ID]]
  SessionID = [[SESSION_ID]]
  GroupID = [[NULL_KEY]]
[ObjectData]
  # =| means we should use the field's special packer mode
  # We treat PCode as an enum, so we'll convert from its string name to its int val
  PCode =| 'PRIMITIVE'
  Material = 3
  # With =| you may represent flags as a tuple of strings rather than an int
  # The only allowed flags in ObjectAdd are USE_PHYSICS (1) and CREATE_SELECTED (2)
  AddFlags =| ('USE_PHYSICS',)
  PathCurve = 16
  ProfileCurve = 1
  PathBegin = 0
  PathEnd = 0
  PathScaleX = 100
  PathScaleY = 100
  PathShearX = 0
  PathShearY = 0
  PathTwist = 0
  PathTwistBegin = 0
  PathRadiusOffset = 0
  PathTaperX = 0
  PathTaperY = 0
  PathRevolutions = 0
  PathSkew = 0
  ProfileBegin = 0
  ProfileEnd = 0
  ProfileHollow = 0
  BypassRaycast = 1
  # =$ indicates an eval()ed field; this will result in a vector 3m above the agent.
  RayStart =$ AGENT_POS + Vector3(0, 0, 3)
  # We can reference whatever we put in `RayStart` by accessing `block`
  RayEnd =$ block["RayStart"]
  RayTargetID = [[NULL_KEY]]
  RayEndIsIntersection = 0
  Scale = <0.5, 0.5, 0.5>
  Rotation = <0.0, 0.0, 0.0, 1.0>
  State = 0
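The =$ lines above are evaluated as Python expressions. Hippolyzer has its own vector type; the minimal Vector3 below is a hypothetical stand-in, but it shows how AGENT_POS + Vector3(0, 0, 3) could produce the ray start and how block["RayStart"] reuses it:

```python
import dataclasses

@dataclasses.dataclass
class Vector3:
    # Hypothetical stand-in for Hippolyzer's actual vector type
    X: float = 0.0
    Y: float = 0.0
    Z: float = 0.0

    def __add__(self, other: "Vector3") -> "Vector3":
        return Vector3(self.X + other.X, self.Y + other.Y, self.Z + other.Z)

AGENT_POS = Vector3(120.0, 99.0, 59.0)  # wherever the agent currently is
block = {"RayStart": AGENT_POS + Vector3(0, 0, 3)}
# RayEnd =$ block["RayStart"] then reuses the value already placed in the block
block["RayEnd"] = block["RayStart"]
print(block["RayEnd"])  # Vector3(X=120.0, Y=99.0, Z=62.0)
```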

The repeat spinner at the bottom of the window lets you send a message multiple times. An i variable is put into the eval context and can be used to vary messages across repeats. With repeat set to three:

OUT ChatFromViewer

[AgentData]
  AgentID = [[AGENT_ID]]
  SessionID = [[SESSION_ID]]
[ChatData]
  # Simple templated f-string
  Message =$ f'foo {i * 2}'
  Type =| 'NORMAL'
  Channel = 0

will print

User: foo 0
User: foo 2
User: foo 4

HTTP requests may be sent through the same window, with equivalent syntax for replacements and eval() within the request body, if requested. As an example, sending a chat message through the UntrustedSimulatorMessage cap would look like:

POST [[UntrustedSimulatorMessage]] HTTP/1.1
Content-Type: application/llsd+xml
Accept: application/llsd+xml
X-Hippo-Directives: 1

<llsd>
<map>
  <key>message</key>
  <string>ChatFromViewer</string>
  <key>body</key>
  <map>
    <key>AgentData</key>
    <array>
      <map>
        <key>AgentID</key>
        <uuid>[[AGENT_ID]]</uuid>
        <key>SessionID</key>
        <uuid>[[SESSION_ID]]</uuid>
      </map>
    </array>
    <key>ChatData</key>
    <array>
      <map>
        <key>Channel</key>
        <integer>0</integer>
        <key>Message</key>
        <string>test</string>
        <key>Type</key>
        <integer>1</integer>
      </map>
    </array>
  </map>
</map>
</llsd>

Addon commands

By default, channel 524 is a special channel used for commands handled by addons' handle_command hooks. For example, an addon that supplies a foo command taking one string parameter can be called by typing /524 foo something in chat.

/524 help will give you a list of all commands offered by currently loaded addons.
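As a hedged illustration (these names are not Hippolyzer's actual addon API, which is handle_command), splitting command-channel chat into a command and its arguments could look like:

```python
COMMAND_CHANNEL = 524

def parse_chat_command(channel: int, chat: str):
    """Split chat said on the command channel into (command, argument string).

    Typing "/524 foo something" in the viewer sends "foo something" on channel 524.
    """
    if channel != COMMAND_CHANNEL:
        return None  # ordinary chat, not a command
    command, _, args = chat.partition(" ")
    return command, args

print(parse_chat_command(524, "foo something"))  # ('foo', 'something')
```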

Useful Extensions

These are quick and dirty, but should be viewer features. I'm not a viewer developer, so they're here. If you are a viewer developer, please put them in a viewer.

  • Local Animation - Allows loading and playing animations in LL's internal format from disk, replaying when the animation changes on disk. Mostly useful for animators that want quick feedback
  • Local Mesh - Allows specifying a target object to apply a mesh preview to. When a local mesh target is specified, hitting the "calculate upload cost" button in the mesh uploader will instead apply the mesh to the local mesh target. It works on attachments too. Useful for testing rigs before a final, real upload.

Potential Changes

  • AISv3 wrapper?
  • Higher level wrappers for common things? I don't really need these, so only if people want to write them.
  • Move things out of templates.py; right now most binary serialization stuff lives there because it's more convenient for me to hot-reload.
  • Ability to add menus?

License

LGPLv3. If you have a good reason why, I might dual license.

This package includes portions of the Second Life(TM) Viewer Artwork, Copyright (C) 2008 Linden Research, Inc. The viewer artwork is licensed under the Creative Commons Attribution-Share Alike 3.0 License.

Contributing

Ensure that any patches are clean with no unnecessary whitespace or formatting changes, and that you add new tests for any added functionality.

Philosophy

With a few notable exceptions, Hippolyzer focuses mainly on decomposition of data, and doesn't provide many high-level abstractions for interpreting or manipulating that data. It's careful to only do lossless transforms on data that are just prettier representations of the data sent over the wire. Hippolyzer's goal is to help people understand how Second Life actually works; automatically employing abstractions that hide how SL works would be counter to that goal.

For Client Developers

This section is mostly useful if you're developing a new SL-compatible client from scratch. Clients based on LL's will work out of the box.

Adding proxy support to a new client

Hippolyzer's proxy application actually combines two proxies, a SOCKS 5 UDP proxy and an HTTP proxy.

To have your client's traffic proxied through Hippolyzer the general flow is:

  • Open a TCP connection to Hippolyzer's SOCKS 5 proxy port
    • This should be done once per logical user session, as Hippolyzer assumes a 1:1 mapping of SOCKS TCP connections to SL sessions
  • Send a UDP associate command without authentication
  • The proxy will respond with a host / port pair that UDP messages may be sent through
  • At this point you will no longer need to use the TCP connection, but it must be kept alive until you want to break the UDP association
  • Whenever you send a UDP packet to a remote host, you'll need to instead send it to the host / port from the UDP associate response. A SOCKS 5 header must be prepended to the data indicating the ultimate destination of the packet
  • Any received UDP packets will also have a SOCKS 5 header indicating the real source IP and port
    • When in doubt, check socks_proxy.py, packets.py and the SOCKS 5 RFC for more info on how to deal with SOCKS.
  • All HTTP requests must be sent through Hippolyzer's HTTP proxy port.
    • You may not need to do any extra plumbing to get this to work if your chosen HTTP client respects the HTTP_PROXY environment variable.
  • All HTTPS connections will be encrypted with the proxy's TLS key. You'll need to either add it to whatever CA bundle your client uses or disable certificate validation when a proxy is used.
    • mitmproxy does its own certificate validation so disabling it in your client is OK.
  • The proxy needs to use content sniffing to figure out which requests are login requests, so make sure your request would pass MITMProxyEventManager._is_login_request()
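The SOCKS 5 UDP encapsulation described above is specified in RFC 1928: each datagram is prefixed with RSV(2), FRAG(1), ATYP(1), DST.ADDR, and DST.PORT. A self-contained sketch of wrapping a packet for an IPv4 destination:

```python
import socket
import struct

def socks5_udp_wrap(dest_ip: str, dest_port: int, payload: bytes) -> bytes:
    """Prepend an RFC 1928 UDP request header for an IPv4 destination."""
    header = struct.pack("!HBB", 0, 0, 1)  # RSV=0, FRAG=0, ATYP=1 (IPv4)
    header += socket.inet_aton(dest_ip) + struct.pack("!H", dest_port)
    return header + payload

wrapped = socks5_udp_wrap("127.0.0.1", 13000, b"LLUDP payload")
print(wrapped[:10].hex())  # 000000017f00000132c8
```

Received datagrams carry the same header describing the real source, so unwrapping is just parsing those fields off the front before handing the payload to the viewer.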

Do I have to do all that?

You might be able to automate some of it on Linux by using LinHippoAutoProxy. If you're on Windows or MacOS the above is your only option.

Should I use this library to make an SL client in Python?

No. If you just want to write a client in Python, you should instead look at using libremetaverse via pythonnet. I removed the client-related code inherited from PyOGP because libremetaverse's was simply better.

https://github.com/CasperTech/node-metaverse/ also looks like a good, modern wrapper if you prefer TypeScript.

Comments
  • LEAP bridge and associated std(in/out) -> TCP forwarding agent


    SL's viewer has had automation support through a subprocess + std(in/out) communication scheme for quite a while now. Recently it's been used for the Puppetry work, but it's generally useful for a number of other things, including basic UI automation and high-level event subscription.

    Using LEAP the official way is annoying because you're bound by what interfaces are exposed over LEAP, and fiddling with the scripts usually means restarting the viewer over and over to get things right. You also can't use stdin or stdout in your scripts for obvious reasons.

    It'd be nice to allow proxied viewers to be automated through hot-reloaded addons or the REPL via the LEAP API. Easiest thing to do would be to make a netcat-style std(in/out) -> TCP forwarding agent. Hippolyzer could receive inbound connections from those agents and be able to control multiple LEAP-automated viewers at once. LEAP connections could be associated with Session objects through data extracted via the LEAP API.

    enhancement 
    opened by SaladDais 4
  • SOCKS5 proxying broken with official Windows viewers


    Didn't notice this since I was only running the proxy in a Windows VM. As described in https://jira.secondlife.com/browse/BUG-134040 Windows viewers write broken SOCKS 5 commands on the wire and expect broken commands back. Seems to be due to struct padding differences cross-compiler when the viewer assumes structs will be un-padded and copies them directly to the wire.

    I don't think this is ever getting patched and we want to support existing broken viewers. Sniffing for trailing nulls past the end of the authentication methods field in the handshake should allow us to invoke a "broken WINSOCKS mode".

    bug 
    opened by SaladDais 3
  • Update docs to mention how to work around Firestorm proxy issues


    Firestorm didn't pull https://bitbucket.org/lindenlab/viewer/commits/454c7f4543688126b2fa5c0560710f5a1733702e into their latest release, so manually specifying a proxy is broken. Mention LinHippoAutoProxy or the debug settings or something. I don't particularly care about CEF / Dullahan proxy support.

    enhancement 
    opened by SaladDais 2
  • Teleports sometimes cause a disconnect when attempted right before Event Queue's long-polling timeout


    The sim assumes that once it's sent the TeleportFinish event over the event queue, it can kill the event queue cap and that the viewer has been handed off to the new region. If the viewer times out the EQ connection just as the TeleportFinish is sent, then the proxy will have read the TeleportFinish response, but the viewer won't have. This should be covered by the EventQueue's explicit acking mechanism, but it doesn't seem to work properly. It appears the server considers an event acked so long as the response bytes were sent off, and immediately discards them. It should only discard messages whose serve id is not greater than the ack value POSTed by the viewer, but it discards them unconditionally. I'm not sure if this is intentional or if it's always been like this.

    Since the viewer won't know it was sent the TeleportFinish it will keep trying to read the event queue CAP, which will never re-serve the TeleportFinish. CrossedRegion probably has the same problem, I haven't tested.

    This seems to be a general problem with SL that's made worse when using an HTTP proxy, since the proxy may leave its connection to the server open and consume the event after the client timed out their connection. We can hack around that by always storing the last EQ response for a sim if there were events, along with the client's ack value in the request.

    The sim's EQ implementation will need to be changed to actually make use of the ack value that gets posted and discard events that haven't been acked for this to be fully fixed.
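The ack-based discard being proposed could be sketched like this (purely illustrative, not the sim's actual implementation): events stay queued until the client polls with an ack covering the id they were served under, so an unread TeleportFinish gets re-served instead of lost.

```python
class EventQueue:
    """Sketch: hold events until the client's posted ack covers them."""

    def __init__(self):
        self._serve_id = 0
        self._pending = []  # [(served_under_id_or_None, event)]

    def push(self, event):
        self._pending.append((None, event))

    def poll(self, client_ack: int):
        # Discard only events the client actually acked, rather than
        # discarding unconditionally once the response bytes were sent.
        self._pending = [(sid, ev) for sid, ev in self._pending
                         if sid is None or sid > client_ack]
        # Everything left is (re-)served under a fresh id.
        self._serve_id += 1
        self._pending = [(self._serve_id, ev) for _, ev in self._pending]
        return self._serve_id, [ev for _, ev in self._pending]
```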

    bug 
    opened by SaladDais 2
  • Use numpy arrays for mesh coordinate serialization and quantization


    Mesh parse time is currently dominated by deserialization and U16 quantized float -> float32 conversion.

    I like the declarative serialization interface of serialization.py, and I sort of like the existing coordinate classes, but they're stupidly slow for large, regular buffers of data like array data where we could benefit from numpy's vectorization.
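For reference, the U16 quantized float decode under discussion is simple for a single value; a pure-Python sketch (numpy would vectorize this same arithmetic over whole coordinate buffers):

```python
def dequantize_u16(value: int, lower: float, upper: float) -> float:
    """Map a quantized U16 back onto the [lower, upper] float range."""
    return lower + (upper - lower) * (value / 65535.0)

print(dequantize_u16(0, -1.0, 1.0))      # -1.0
print(dequantize_u16(65535, -1.0, 1.0))  # 1.0
```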

    enhancement 
    opened by SaladDais 1
  • Rewrite git history to add pyogp commits with correct committer field


    Because I used git am to import the pyogp commits from pyogp mercurial, it left the original authors as the authors of the commits but me as the committer. GitHub seems to count contributions by committer, so people who'd written code for pyogp aren't showing up in the contributors sidebar.

    Leaving it that way seems rude, so let's figure out a way to rebase on top of properly attributed commits. Will cause a conflict for anyone who has existing master checkouts but they can deal.

    enhancement 
    opened by SaladDais 1
  • Better EventQueue event injection support


    Right now an event can only be injected when we actually receive an event over the real EQ due to how we're intercepting the response. That means injected events can be delayed by up to 30 seconds. Not nice.

    It might make sense to switch to a strategy where we hold onto the flow ID for event queue requests with pending server responses and preempt the server response by injecting our own response. It's not clear to me if mitmproxy has support for closing the server half of the proxied connection for an in-flight request and injecting a response, but that's the obvious choice.

    enhancement 
    opened by SaladDais 1
  • Bump mitmproxy from 7.0.2 to 7.0.3

    Bumps mitmproxy from 7.0.2 to 7.0.3. Per mitmproxy's 7.0.3 release notes: fixes CVE-2021-39214 (request smuggling vulnerabilities reported by @chinchila), exposes TLS 1.0 as a possible minimum version on older pyOpenSSL releases, and fixes compatibility with Python 3.10.

    dependencies 
    opened by dependabot[bot] 1
  • Add message log loading / saving


    Depends on #19 since importing should open a new log window. Would be helpful for complex issues that span multiple messages.

    For example, you could load a message log dump programmatically, then replay inbound ObjectUpdates through an ObjectManager to reconstruct the client's understanding of the scene graph at a particular point in time.

    enhancement 
    opened by SaladDais 1
  • Support reliable LLUDP message sending


    Would be nice to support LLUDP's reliability mechanism for our injected messages. Messages should be kept in a list with the retry count and periodically re-sent until acked or the retry limit has been reached. Need to be careful to change outgoing StartPingChecks to take our injected packets into account for the OldestUnacked field.
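A hedged sketch of the retry bookkeeping described (names and intervals are illustrative, not Hippolyzer's API; the StartPingChecks adjustment is out of scope here):

```python
import time

MAX_RETRIES = 3
RESEND_INTERVAL = 1.0  # seconds between resend attempts

class PendingReliable:
    def __init__(self, packet_id: int, data: bytes):
        self.packet_id = packet_id
        self.data = data
        self.retries = 0
        self.last_sent = time.monotonic()

class ReliableSender:
    def __init__(self, send_func):
        self._send = send_func
        self._pending = {}  # packet_id -> PendingReliable

    def send_reliable(self, packet_id: int, data: bytes):
        self._pending[packet_id] = PendingReliable(packet_id, data)
        self._send(data)

    def handle_ack(self, packet_id: int):
        # Stop retrying once the other side acks the packet
        self._pending.pop(packet_id, None)

    def tick(self, now=None):
        """Periodically re-send anything unacked, up to the retry limit."""
        now = time.monotonic() if now is None else now
        for pending in list(self._pending.values()):
            if now - pending.last_sent >= RESEND_INTERVAL:
                if pending.retries >= MAX_RETRIES:
                    del self._pending[pending.packet_id]  # give up
                    continue
                pending.retries += 1
                pending.last_sent = now
                self._send(pending.data)
```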

    enhancement 
    opened by SaladDais 1
  • Bump urllib3 from 1.26.4 to 1.26.5

    Bumps urllib3 from 1.26.4 to 1.26.5. Per urllib3's 1.26.5 changelog (2021-05-26): fixed deprecation warnings emitted in Python 3.10, updated the vendored six library to 1.16.0, and improved performance of the URL parser when splitting the authority component.

    dependencies 
    opened by dependabot[bot] 1
  • Better handling of the various disparate inventory representations


    Depending on how the inventory is requested and what is requesting it, inventory contents and events relating to them may have 5 or 6 different incompatible representations.

    • InventoryAPIv3: Ok, you get an LLSD object back, U32s are binary LLSD fields and type fields are strings.
    • FetchInventory2: Old and bad, but still used! You get a sort of similar LLSD object back except U32s aren't binary fields, they're just converted to S32 integers. Also all of the asset type / inv type fields are numeric rather than the string form of the types.
    • Login inventory skeleton: Pretty similar to the above, I guess, but not LLSD because it's in the XML-RPC payload.
    • BulkUpdateInventory, UpdateCreateInventoryItem, ...: Old templated message things that sometimes get sent as LLSD over the EQ, sometimes binary over UDP. Being that they're templated messages their structures are flat and fixed, but they're relatively easy to reason about if you remember the block names for whatever particular message.
    • Object inventories: Weird old proprietary textual schema similar to the one that skins and shapes use. Nasty to parse and serialize unambiguously, so other uses have moved away from it.
    • Inventory cache: Sort of similar to InventoryAPIv3 format, except everything is serialized in newline-delimited LLSD notation format.

    Right now only AIS and legacy schema formats are supported well, with some conversion functions translating between AIS and the templated UDP stuff. Need better support for the other representations of inventory updates, and to add in-place updating of inventory models to apply them.

    enhancement 
    opened by SaladDais 0
  • LLMesh -> Collada -> LLMesh conversion code to allow for in-proxy mesh upload


    This would be a good first step to writing mesh upload code totally independent of the viewer, allowing prototyping of new importer features like glTF support (via https://github.com/SaladDais/impasse), as well as file watchers for the local mesh feature.

    Having code to allow round-tripping LLMesh -> Collada -> LLMesh is probably the easiest way to ensure we haven't gotten confused about LLMesh or Collada semantics in our import code.

    We should make an example .dae that uses joint positions, vertex weights, multiple materials and multiple instances of the same mesh data, and then log the LLMesh-serialized version the viewer's uploader sends off to make sure that:

    • Our own conversion code gives the same result as the viewer when converting the .dae to LLMesh (assuming no normal or LoD generation)
    • Our LLMesh upload data -> Collada converter generates a .dae that's semantically equivalent to the original input .dae
    enhancement 
    opened by SaladDais 1
  • Prevent GUI paints from blocking proxy activity and vice-versa


    Right now all code other than the mitmproxy wrapper runs in the same process and thread. This leads to issues where GUI paints can block proxy activity and vice-versa, mostly noticeable when a lot of messages are being logged at once.

    On the one hand, I don't normally notice the perf hit, and having everything on one thread allows writing relatively simple GUI code for addons without needing to use signals / slots (like the blueish object list). On the other hand, a few people have told me that they disable the message log for perf reasons.

    In the short term, it'd make sense to batch up additions to the message log list and only try to draw every 0.1 seconds, like we did before.

    Longer-term, I'll have to come up with cross-thread implementations of the AddonManager.UI APIs using signals and slots, and figure out how to run UIs like the blueish object list in the UI thread.
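    The short-term batching idea could look roughly like the sketch below. In the real GUI the flush would be driven by a Qt timer on the UI thread; here `maybe_flush()` takes an explicit timestamp to keep the sketch dependency-free, and `BatchingLogSink` is a hypothetical name, not an existing class.

    ```python
    import time

    class BatchingLogSink:
        """Collects log entries and flushes them to the UI at most every
        `interval` seconds, so many messages cost one widget update."""

        def __init__(self, draw_callback, interval=0.1):
            self._pending = []
            self._draw = draw_callback
            self._interval = interval
            self._last_flush = 0.0

        def add(self, entry):
            self._pending.append(entry)

        def maybe_flush(self, now=None):
            now = time.monotonic() if now is None else now
            if self._pending and now - self._last_flush >= self._interval:
                self._draw(self._pending)  # one repaint for the whole batch
                self._pending = []
                self._last_flush = now

    drawn = []
    sink = BatchingLogSink(drawn.extend, interval=0.1)
    for i in range(50):
        sink.add(f"msg {i}")
        sink.maybe_flush(now=0.05)  # too soon: nothing drawn yet
    sink.maybe_flush(now=0.2)       # interval elapsed: all 50 drawn in one batch
    print(len(drawn))  # 50
    ```

    Compared with repainting per message, the worst case drops from one list-widget update per packet to ten updates per second, whatever the message rate.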

    enhancement 
    opened by SaladDais 0
  • Generate typed message classes based on message template

    These should be generated by a script and checked into the repo so people can take advantage of their IDE's autocomplete when writing messages. The classes should all inherit from the base Message class, so un-typed access via __getitem__ is still possible, and should only be a thin, typed wrapper around that API. The same goes for any Block subclasses. We need to think about how specifying the unpacked form of a var's value should work for those: maybe a special wrapper class for packed values, so isinstance() can be used to detect them, rather than the _ suffix on names used right now?
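    A generated class might look something like the sketch below. The `Message` base here is a stand-in with dict-style `__getitem__`, not Hippolyzer's actual class, and the block layout is abbreviated; the point is just that the typed accessors and the untyped subscript API read from the same storage.

    ```python
    class Message:
        """Stand-in for the untyped base class: blocks accessed via __getitem__."""

        def __init__(self, name, **blocks):
            self.name = name
            self._blocks = blocks

        def __getitem__(self, block_name):
            return self._blocks[block_name]

    class ChatFromViewer(Message):
        """What a generated typed wrapper might look like: a thin, typed
        layer over the same underlying block storage."""

        def __init__(self, message: str, channel: int = 0):
            super().__init__(
                "ChatFromViewer",
                ChatData={"Message": message, "Channel": channel},
            )

        @property
        def message(self) -> str:
            return self["ChatData"]["Message"]

        @property
        def channel(self) -> int:
            return self["ChatData"]["Channel"]

    msg = ChatFromViewer("hello", channel=5)
    print(msg.channel, msg["ChatData"]["Message"])  # typed and untyped access agree
    ```

    Because the subclass only adds properties, any existing addon code that subscripts messages by block and var name keeps working unchanged.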

    enhancement 
    opened by SaladDais 1
  • Implement (De)serialization for LayerData packets

    Needed to get details about terrain, water, wind, etc. It's a nightmare format based on DCT blocks. We'll probably need an actual bitstream implementation to read them.
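    A minimal bitstream reader of the sort the DCT decoding would need might start like this. This is only a sketch: it reads most-significant bit first, and LayerData's actual bit ordering and packing would need to be checked against the viewer source before using it.

    ```python
    class BitReader:
        """Reads `count` bits at a time from a byte buffer,
        most-significant bit first within each byte."""

        def __init__(self, data: bytes):
            self._data = data
            self._pos = 0  # offset in bits, not bytes

        def read_bits(self, count: int) -> int:
            value = 0
            for _ in range(count):
                byte = self._data[self._pos >> 3]
                bit = (byte >> (7 - (self._pos & 7))) & 1
                value = (value << 1) | bit
                self._pos += 1
            return value

    reader = BitReader(bytes([0b1011_0001, 0b1111_0000]))
    print(reader.read_bits(4), reader.read_bits(8))  # 11 31
    ```

    Reads are free to straddle byte boundaries, which is exactly what DCT-style variable-length coefficient encodings require.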

    enhancement help wanted 
    opened by SaladDais 0