NielsenIQ Retail Reader


Overview:

NielsenIQ Retail Reader is the first special-purpose library built to simplify processing of the Kilts Center's NielsenIQ Retail Scanner data for academic research. Its striking feature is Dask, the underlying framework that lets users read NielsenIQ data with limited on-device resources by processing larger-than-memory data in chunks and leveraging distributed computing. The library follows the Kilts/NielsenIQ directory structure.

Data:

Information about the Retail Scanner data can be found here: Kilts Center for Marketing

IMPORTANT:

Access to NielsenIQ Retail Data:

Please note that NielsenIQ Retail data is proprietary and access is restricted to individuals whose institutions have an existing subscription or agreement with NielsenIQ. If you intend to use this library for accessing and analyzing NielsenIQ data, you must first ensure that you are authorized to do so by your institution. Unauthorized access or use of this data may violate terms of use and could have legal implications. The NielsenIQ dataset must strictly follow the standard naming convention laid out by NielsenIQ and the Kilts Center for Marketing; under no circumstances should the naming convention be changed.

NielsenIQRetail processes Retail Scanner Data.

Table of Contents

  • Main Features
  • Where to get it
  • Dependencies
  • How to use
  • Debug
  • License
  • Background
  • Getting help

Main Features

Here are just a few of the things that NielsenIQRetail does well:

  • Efficiently manages the NielsenIQ directory hierarchy, simplifying the process for researchers and significantly reducing the time needed to navigate the NielsenIQ documentation.
  • Processes larger-than-memory dataframes on a single machine through batched, chunked reads (see the sketch after this list).
  • Scales to terabyte-sized datasets through Dask's distributed computing, improving overall data-reading speed.
  • Provides simple, distinct commands for separating sales, stores, and products data for analysis purposes.
  • Excellent compatibility with NumPy, Pandas, and the wider PyData ecosystem.
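
The chunked, larger-than-memory reading described above rests on Dask dataframes. As a rough sketch of that underlying pattern (the file path and column names below mimic the Kilts/NielsenIQ movement-file layout but are illustrative assumptions, not the library's actual API):

import dask.dataframe as dd

# Lazily read tab-delimited movement files in partitions;
# nothing is loaded into memory until .compute() is called.
movement = dd.read_csv(
    "nielsen_extracts/RMS/2019/Movement_Files/*/*.tsv",  # illustrative path
    sep="\t",
    dtype={"store_code_uc": "int64", "upc": "object", "units": "int64"},
)

# Aggregate out-of-core: Dask processes the data partition by partition
weekly_units = movement.groupby("week_end")["units"].sum().compute()
print(weekly_units.head())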

Where to get it

The source code is currently hosted on GitHub at: https://github.com/pratikrelekar/NielsenDSRS

Binary installers are available at the Python Package Index (PyPI).

For PyPI install:

pip install NielsenIQRetail

For latest development release:

pip install git+https://github.com/pratikrelekar/NielsenDSRS

For installing the dependencies from requirements.txt:

python -m pip install -r requirements.txt

Dependencies

Before using NielsenIQRetail, ensure that all dependencies are correctly installed. Additionally, verify that the client hosting the Python environment, the scheduler, and the worker nodes all have the same versions of the package and its dependencies installed.
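
A quick way to verify this is Dask's own Client.get_versions (standard dask.distributed functionality, not specific to this library), which compares package versions across the client, scheduler, and workers:

from dask.distributed import Client

client = Client()  # or Client("tcp://<scheduler-address>:8786") for a remote cluster

# check=True raises an error if critical package versions differ between nodes
versions = client.get_versions(check=True)
print(versions["client"]["packages"])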

How to use

For local processing on system memory:

from dask.distributed import Client

# Calculate memory per worker based on total system memory
total_memory_gb = 16   # your system's total RAM in GB (edit to match your machine)
n_workers = 4          # number of workers (edit to match the cores you want to use)
memory_per_worker_gb = int(total_memory_gb / n_workers)  # memory per worker in GB

# Start the client with given specifications
client = Client(n_workers=n_workers, threads_per_worker=1, 
                memory_limit=f'{memory_per_worker_gb}GB')
print(client)

To utilize the full power of Dask, connect to an auxiliary memory cluster for large-scale data processing:

# You can only connect to the cluster from inside the Python client environment
from dask.distributed import Client
client = Client('dask-scheduler.default.svc.cluster.local:<port>')  # replace with your cluster's actual scheduler address and port (8786 by default)
client
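
Once connected, a quick sanity check (again standard dask.distributed functionality) is to print the dashboard link and the addresses of the connected workers, using the client object created above:

print(client.dashboard_link)                     # URL of the Dask diagnostics dashboard
print(list(client.scheduler_info()["workers"]))  # addresses of connected workers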

Debug

Make sure the NielsenDSRS module and all of its dependencies are installed on the Dask client, scheduler, and worker nodes, and that the versions match across all of them. The following code helps debug errors related to version mismatches:

For worker nodes:

def check_module():
    try:
        import NielsenDSRS
        return "Installed"
    except ImportError:
        return "Not Installed"

# Run the check across all workers
results = client.run(check_module)
for worker, result in results.items():
    print(f"{worker}: {result}")

For Scheduler:

scheduler_result = client.run_on_scheduler(check_module)
print(f"Scheduler: {scheduler_result}")

For Client:

try:
    import NielsenDSRS
    print("NielsenDSRS is installed on the client.")
except ImportError:
    print("NielsenDSRS is not installed on the client.")

If there is a mismatch, or if NielsenIQRetail is not correctly installed, follow these steps:

# Function to install NielsenDSRS on a node
def install_nielsendsrs():
    import subprocess
    import sys
    # Use the node's own Python interpreter so the package is installed into the correct environment
    subprocess.check_call([sys.executable, "-m", "pip", "install", "NielsenIQRetail"])

# Install on all workers
client.run(install_nielsendsrs)

# Install on the scheduler
client.run_on_scheduler(install_nielsendsrs)
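
After installing, it can be necessary to restart the workers so the freshly installed package is importable in new worker processes (a suggested extra step using standard dask.distributed functionality, not something documented by the library):

# Restart all workers so they pick up the newly installed package
client.restart()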

License

MIT License

Background

This library was developed at Data Science Research Services (University of Illinois Urbana-Champaign) in 2024 and has been under active development since then. It currently supports NielsenIQ Retail Scanner data from 2006 to 2020.

Getting help

For general questions and discussions, visit the DSRS mailing list.
