Solved

GraphQL Get Table Record by custom fields

  • February 7, 2022
  • 4 replies
  • 546 views

abhatt

Hello Team,

 

I am trying to get a record by searching on a custom field I have added. I can get table records using the default “title” field, as shown below. But I have added a custom field (e.g. SourceID) and need to fetch the table record using that field. How can I do that?

query {
  table_records(search: {title: "Test"}, table_id: "<TABLE ID>") {
    edges {
      cursor
      node {
        id
        title
        url
      }
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}


4 replies

Lucas Democh
  • Pipefy Legend
  • 396 replies
  • February 7, 2022

Hi @abhatt,

 

Unfortunately, there is still no way to pass a custom field as a search parameter in that query.

The workaround is to fetch all the records and then filter them yourself according to your needs.
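
A minimal sketch of that idea in Python, using the same table_records / record_fields structure that appears later in this thread (the token, table ID and the "SourceID" field name are placeholders taken from the question; pagination is omitted here for brevity):

import requests

url = "https://api.pipefy.com/graphql"
headers = {
    "authorization": "Bearer <YOUR_TOKEN>",
    "content-type": "application/json"
}

# Ask for every record together with its custom fields (record_fields returns name/value pairs).
query = '{ table_records(table_id: "<TABLE ID>") { edges { node { id title record_fields { name value } } } } }'
response = requests.post(url, json={"query": query}, headers=headers)
edges = response.json()["data"]["table_records"]["edges"]

# Client-side filter: keep only the records whose "SourceID" field has the wanted value.
wanted = "Test"
matches = [
    edge["node"]
    for edge in edges
    if any(f["name"] == "SourceID" and f["value"] == wanted
           for f in edge["node"]["record_fields"])
]
print(matches)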

 


abhatt
  • Author
  • Inspiring
  • 12 replies
  • February 7, 2022

Thanks Lucas. How can we search all records and filter by a custom field? I am new here, so if you could also point me to an article that demonstrates this, it would be a great help!


Lucas Democh
  • Pipefy Legend
  • 396 replies
  • Answer
  • February 8, 2022

The way I do this is via Python.

Below is the code I use for extraction (Google Colab format):

Direct notebook link: https://colab.research.google.com/drive/1iscCAFMzHw7B9uM4yuslIkealqlvaINI?usp=sharing

Code:

import boto3
import requests
import pandas as pd
import json

#PAYLOAD START

table_id = "X"
pipefy_token = 'X'
table_fields = [""]
name = 'X'
url = "https://api.pipefy.com/graphql"

headers = {
    "authorization": f"Bearer {pipefy_token}",
    "content-type": "application/json"
}

df = pd.DataFrame(columns=table_fields)

has_next_page = True
first_query = True

# Page through every table record using cursor-based pagination:
# each response's pageInfo.endCursor is passed as "after" on the next request.
while has_next_page:

    if first_query:
        payload = {"query": "{  table_records(table_id:\""+table_id+"\") {edges {node {id title record_fields {name value}}} pageInfo {endCursor hasNextPage}}}"}
        first_query = False
    else:
        payload = {"query": "{  table_records(table_id:\""+table_id+"\",after:\""+end_cursor+"\") {edges {node {id title record_fields {name value}}} pageInfo {endCursor hasNextPage}}}"}

    response = requests.request("POST", url, json=payload, headers=headers)

    json_data = json.loads(response.text)
    end_cursor = json_data["data"]["table_records"]["pageInfo"]["endCursor"]
    has_next_page = json_data["data"]["table_records"]["pageInfo"]["hasNextPage"]
    total_records_pg = len(json_data["data"]["table_records"]["edges"])

    for i in range(total_records_pg):
        card_title = json_data["data"]["table_records"]["edges"][i]["node"]["title"]
        card_data_d = json_data["data"]["table_records"]["edges"][i]["node"]["record_fields"]
        card_data = {x['name']:x['value'] for x in card_data_d}
        # DataFrame.append was removed in newer pandas; concat does the same job
        df = pd.concat([df, pd.DataFrame([card_data])], ignore_index=True)

#PAYLOAD END

#CHANGING THE NAMES OF THE COLUMNS TO SAVE IN AWS (REMOVING SPECIAL CHARACTERS)
df.columns = df.columns.str.replace(' ', '_')
df.columns = df.columns.str.replace('ã', 'a')
df.columns = df.columns.str.replace('Á', 'A')
df.columns = df.columns.str.replace('é', 'e')
df.columns = df.columns.str.replace('ê', 'e')
df.columns = df.columns.str.replace('á', 'a')
df.columns = df.columns.str.replace('ç', 'c')
df.columns = df.columns.str.replace('í', 'i')
df.columns = df.columns.str.replace('ú', 'u')
df.columns = df.columns.str.replace('õ', 'o')
df.columns = df.columns.str.replace('ó', 'o')
df.columns = df.columns.str.replace('.', '', regex=False)  # literal dot, not a regex

#SAVE CSV FILE ON AWS S3
filename = "/tmp/mydata.csv"
df.to_csv(filename, header=True)

#AWS CLIENT
client = boto3.client(
    's3',
    aws_access_key_id='X',
    aws_secret_access_key='X'
)

client.upload_file(filename, 'X', f'{name}.csv')  # 'X' is the destination S3 bucket name
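
To then answer the original question (finding a record by a custom field), one could filter the resulting DataFrame on that field's column, for example (the column name "SourceID" comes from the question; the value is a placeholder):

# Hypothetical filter step: keep only the rows whose SourceID column matches the wanted value.
matching = df[df["SourceID"] == "some-source-id"]
print(matching)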

 


abhatt
  • Author
  • Inspiring
  • 12 replies
  • February 20, 2022

Thanks Lucas, much appreciated!

