
Hello Team,

I am trying to get a record by searching on a custom field I have added. I can get table records using the default “title” field, as below. But I have added a custom field, e.g. SourceID, and need to get table records using that field. How can I do that?

query {
  table_records(search: {title: "Test"}, table_id: "<TABLE ID>") {
    edges {
      cursor
      node {
        id
        title
        url
      }
    }
    pageInfo {
      endCursor
      hasNextPage
      hasPreviousPage
      startCursor
    }
  }
}

Hi @abhatt,

Unfortunately, there is still no way to pass a custom field as a search parameter in that query.

The workaround is to fetch all records and then filter them yourself by the field you need.
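
For illustration, here is a minimal sketch of that approach in Python; the token, table ID, and the "ABC-123" value are placeholders, and pagination is left out for brevity:

import requests

pipefy_token = "X"  # placeholder API token
table_id = "X"      # placeholder table ID

# Request record_fields so every custom field comes back as a {name, value} pair
query = '{ table_records(table_id:"' + table_id + '") {edges {node {id title record_fields {name value}}}}}'

response = requests.post(
    "https://api.pipefy.com/graphql",
    json={"query": query},
    headers={"authorization": f"Bearer {pipefy_token}"},
)
edges = response.json()["data"]["table_records"]["edges"]

# Keep only the records whose custom "SourceID" field has the value we want
matches = [
    e["node"] for e in edges
    if any(f["name"] == "SourceID" and f["value"] == "ABC-123"
           for f in e["node"]["record_fields"])
]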

Thanks Lucas! Could you show a fuller example of searching all records and filtering by a custom field? I am new here, so if you can also point me to an article that demonstrates this, it would be a great help!


The way I do this is via Python.

Below is the code I use for extraction (Google Colab format):

Direct notebook link: https://colab.research.google.com/drive/1iscCAFMzHw7B9uM4yuslIkealqlvaINI?usp=sharing

Code:

import boto3
import requests
import pandas as pd
import json

#PAYLOAD START

table_id = "X"
pipefy_token = 'X'
table_fields = []  # optionally pre-seed the expected column names here
name = 'X'
url = "https://api.pipefy.com/graphql"

headers = {
    "authorization": f"Bearer {pipefy_token}",
    "content-type": "application/json"
}

df = pd.DataFrame(columns=table_fields)

has_next_page = True
first_query = True

# Walk every page of table_records using cursor-based pagination
while has_next_page:

    if first_query:
        payload = {"query": "{ table_records(table_id:\""+table_id+"\") {edges {node {id title record_fields {name value}}} pageInfo {endCursor hasNextPage}}}"}
        first_query = False
    else:
        # Resume from the cursor returned by the previous page
        payload = {"query": "{ table_records(table_id:\""+table_id+"\", after:\""+end_cursor+"\") {edges {node {id title record_fields {name value}}} pageInfo {endCursor hasNextPage}}}"}

    response = requests.request("POST", url, json=payload, headers=headers)

    json_data = json.loads(response.text)
    end_cursor = json_data["data"]["table_records"]["pageInfo"]["endCursor"]
    has_next_page = json_data["data"]["table_records"]["pageInfo"]["hasNextPage"]
    total_records_pg = len(json_data["data"]["table_records"]["edges"])

    for i in range(total_records_pg):
        card_title = json_data["data"]["table_records"]["edges"][i]["node"]["title"]
        card_data_d = json_data["data"]["table_records"]["edges"][i]["node"]["record_fields"]
        # Build one row per record from the {name, value} pairs
        card_data = {x['name']: x['value'] for x in card_data_d}
        card_data['title'] = card_title  # keep the record title alongside the custom fields
        # DataFrame.append was removed in pandas 2.x; concat a one-row frame instead
        df = pd.concat([df, pd.DataFrame([card_data])], ignore_index=True)

#PAYLOAD END

#CHANGING THE COLUMN NAMES TO SAVE IN AWS (REMOVING SPECIAL CHARACTERS)
accents = str.maketrans("ãÁéêáçíúõó", "aAeeaciuoo")
df.columns = [c.translate(accents).replace(' ', '_').replace('.', '') for c in df.columns]

#SAVE CSV FILE LOCALLY BEFORE UPLOADING TO AWS S3
filename = "/tmp/mydata.csv"
df.to_csv(filename, header=True)

#AWS CLIENT
client = boto3.client(
    's3',
    aws_access_key_id='X',
    aws_secret_access_key='X'
)

client.upload_file(filename, 'X', f'{name}.csv')  # 'X' is the bucket name placeholder
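
Once the DataFrame is built, getting a record by a custom field (the SourceID from the original question) is a single filter; the column name and value below are just examples:

# Rows whose custom SourceID field matches the value we want
matching = df[df["SourceID"] == "ABC-123"]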

Thanks Lucas, much appreciated!

