Friday, August 2, 2019

Facebook Ad Objects and Insights - API Consumption


The purpose of this tool is to automate complex data pulls from the Facebook API. All params and credentials are automatically downloaded from pre-defined S3 buckets.

Features of this tool are --
  • Users can customize what gets pulled just by editing a JSON file in the S3 location (see the sketch below this list)
  • Users can add a new data pull just by adding parameters to a JSON file
  • The credentials folder location and the data drop location can be changed without a developer's intervention
  • Users can replace the FB App's credentials when they expire
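To make the S3-driven configuration concrete, here is a minimal, hypothetical sketch of the download-and-load step. The bucket name, keys, and the fetch_json helper are placeholders of mine, not the tool's real values; only boto3's standard download_file call and the json module are real.

import json
import boto3

s3 = boto3.client("s3")

def fetch_json(bucket, key, local_path):
    # Download a JSON file from S3 and parse it into a dict.
    s3.download_file(bucket, key, local_path)
    with open(local_path) as f:
        return json.load(f)

# Placeholder bucket/keys; the real ones live in the tool's settings.
credentials = fetch_json("my-config-bucket", "credentials.json",
                         "credentials/credentials.json")
params = fetch_json("my-config-bucket", "params.json",
                    "params/params.json")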

Project structure...

fb-api-project (root)
│   main_file.py
│   README.md
│   requirement.txt
│   .gitignore
│
└─── credentials
│   │   __init__.py
│   │   adobjectsfields.json  # This file holds the fields to pull.
│   │                         # It will be replaced by downloading from S3.
│   │   credentials.json      # This file holds the credentials of both S3 and the FB App.
│   │                         # It will be replaced by downloading from S3.
│
└─── func
│   │   __init__.py
│   │   func.py               # Blank file. Will be removed / updated in future.
│
└─── params
│   │   __init__.py
│   │   fieldlist.json        # For AdObjects
│   │   params.json           # For Insight data
│
└─── processing
│   │   __init__.py
│   │   collectparams.py
│   │   run.py
│
└─── settings
│   │   __init__.py
│   │   settings.py
│
└─── utils
    │   __init__.py
    │   getData.py
    │   s3FuncTools.py
    │   version.py            # Will be removed in future.

Sample code...

from processing.run import RunProcess

r = RunProcess()
r.get_adobject_data(ad_object='campaign')
r.get_insights(saveto='data', data_limit=100)

# File(s) will be stored in root/data directory
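The post does not show RunProcess's internals, so purely as an illustration, a pull like the one above could be built on the official facebook_business SDK roughly as follows; the app ID, token, account ID, and field lists are placeholders:

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

# Credentials would come from the downloaded credentials.json.
FacebookAdsApi.init(app_id="<APP_ID>", app_secret="<APP_SECRET>",
                    access_token="<ACCESS_TOKEN>")

account = AdAccount("act_<ACCOUNT_ID>")
campaigns = account.get_campaigns(fields=["name", "objective", "status"])
insights = account.get_insights(fields=["impressions", "spend"],
                                params={"level": "campaign", "limit": 100})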

Friday, July 26, 2019

Shapley Value Regression in Python

Shapley Value Regression

Introduction:

In the econometric literature, multicollinearity is defined as the incidence of a high degree of correlation among some or all regressor variables. Strong multicollinearity has deleterious effects on the confidence intervals of linear regression coefficients (β in the linear regression model y=Xβ+u). Although it does not affect the explanatory power (R²) of the regressors or the unbiasedness of the estimated coefficients associated with them, it does inflate their standard errors of estimate, rendering tests of hypothesis misleading or paradoxical, often such that although R² may be very high, the individual coefficients may all have poor Student's t-values. Thus, strong multicollinearity may lead to a failure to reject a false null hypothesis that a regressor variable has no effect on the regressand (a type II error). Very frequently, it also affects the sign of the regression coefficients. However, it has been pointed out that a high degree of correlation (measured in terms of a large condition number; Belsley et al., 1980) among some or all regressor variables alone, unsupported by a large variance of error in the regressand variable y, has little effect on the precision of regression coefficients. A large condition number coupled with a large variance of error in the regressand variable destabilizes the regression estimator; either of the two in isolation cannot cause much harm, although the condition number is relatively more potent in determining the stability of the estimated regression coefficients (Mishra, 2004-a).

Shapley value regression:

This is an entirely different strategy to assess the contribution of regressor variables to the regressand variable. It owes its origin to the theory of cooperative games (Shapley, 1953). The value of R² obtained by fitting a linear regression model y=Xβ+u is considered the value of a cooperative game played by X (whose members, xj ϵ X; j=1, m, work in a coalition) against y (explaining it). The analyst does not have enough information to disentangle the contributions made by the individual members xj ϵ X; j=1, m; only their joint contribution (R²) is known. The Shapley value decomposition imputes the most likely contribution of each individual xj ϵ X; j=1, m, to R².

An algorithm to impute the contribution of individual variables to the Shapley value:

Let there be m regressor variables in the model y=Xβ+u. Let X(p, r) be an r-membered subset of X in which the p-th regressor appears, and let X(q, r−1) be the (r−1)-membered subset obtained by removing the p-th regressor from X(p, r). Further, let R²(p, r) be the R² obtained by regression of y on X(p, r) and R²(q, r−1) be the R² obtained by regression of y on X(q, r−1), with R²(q, 0) = 0. Then, the share of the regressor variable p (that is, xp ϵ X) is given by

S(p) = (1/m) Σ(r=1..m) (1/k) Σ [R²(p, r) − R²(q, r−1)]

where, for each r, the inner sum runs over the r-membered subsets containing xp and k is the number of cases in which the evaluation in [.] was carried out. The sum of all S(p) for p=1, m (that is, S(1)+S(2)+…+S(m)) is the R² of y=Xβ+u (all xj ϵ X), or the total value of the game.
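Since the post's title promises Python, here is a minimal sketch of the algorithm above, assuming only numpy; the function names r_squared and shapley_shares are mine. It averages the bracketed R² differences over the k subsets of each size r and divides by m, exactly as in the worked table below.

from itertools import combinations
import numpy as np

def r_squared(X, y):
    # R² from an OLS fit of y on the columns of X (with intercept).
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def shapley_shares(X, y):
    # S(p) for each regressor; the shares sum to the full-model R².
    m = X.shape[1]
    shares = np.zeros(m)
    for p in range(m):
        others = [j for j in range(m) if j != p]
        total = 0.0
        for r in range(1, m + 1):
            diffs = []
            # All r-membered subsets in which regressor p appears.
            for rest in combinations(others, r - 1):
                with_p = r_squared(X[:, list(rest) + [p]], y)
                without_p = r_squared(X[:, list(rest)], y) if rest else 0.0
                diffs.append(with_p - without_p)   # the term in [.]
            total += np.mean(diffs)                # Sum/k for this r
        shares[p] = total / m                      # grand value for xp
    return shares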

Computational details of the share of x1 in R²:


For each subset size r, a "plus" row uses an r-membered subset containing x1 and a "minus" row uses the corresponding (r−1)-membered subset with x1 removed (m = 4):

 size   subset         R²        operation   value
  4     x1 x2 x3 x4    0.98237   plus        +0.98237
  3     x2 x3 x4       0.97282   minus       -0.97282
                                 k=1         Sum/k = 0.009556
  3     x1 x2 x3       0.98228   plus        +0.98228
  3     x1 x2 x4       0.98233   plus        +0.98233
  3     x1 x3 x4       0.98128   plus        +0.98128
  2     x2 x3          0.84702   minus       -0.84702
  2     x2 x4          0.68006   minus       -0.68006
  2     x3 x4          0.93529   minus       -0.93529
                                 k=3         Sum/k = 0.161175
  2     x1 x2          0.97867   plus        +0.97867
  2     x1 x3          0.54816   plus        +0.54816
  2     x1 x4          0.97247   plus        +0.97247
  1     x2             0.66626   minus       -0.66626
  1     x3             0.28587   minus       -0.28587
  1     x4             0.67454   minus       -0.67454
                                 k=3         Sum/k = 0.290878
  1     x1             0.53394   plus        +0.53394
                                 k=1         Sum/k = 0.533948

 Grand value of x1 = Sum(Sum/k)/m = (0.009556 + 0.161175 + 0.290878 + 0.533948)/4 = 0.248889
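The grand value can be checked directly from the Sum/k column; a two-line verification in Python (values copied from the table above):

# Average the per-size means (Sum/k) over the m = 4 subset sizes.
sums_over_k = [0.009556, 0.161175, 0.290878, 0.533948]
print(round(sum(sums_over_k) / 4, 6))   # 0.248889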

Wednesday, February 21, 2018

Sorting a list based on values from another list


Let’s consider the following two lists…

X = ["a", "b", "c", "d", "e", "f", "g", "h", "i"]
Y = [ 0,   1,   1,    0,   1,   2,   2,   0,   1]

Now we would like to sort the list X based on the list Y. The final sorted list X would look like the following…

["a", "d", "h", "b", "c", "e", "i", "f", "g"]

Solution…
  1. zip the two lists.
  2. create a new, sorted list from the zipped pairs using sorted(), ordering by the Y value (the first element of each pair).
  3. using a list comprehension, extract the X value (the second element of each pair) from the sorted, zipped list.


[x for _, x in sorted(zip(Y,X), key=lambda pair: pair[0])]
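An equivalent index-based approach (my addition, pure standard library) sorts the indices of Y and then indexes into X; because Python's sort is stable, ties in Y keep their original order, matching the output above:

# Sort the index positions of Y, then pull X values in that order.
order = sorted(range(len(Y)), key=Y.__getitem__)
result = [X[i] for i in order]
print(result)   # ['a', 'd', 'h', 'b', 'c', 'e', 'i', 'f', 'g']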